Intel's disruption is now complete (jamesallworth.medium.com)
894 points by davnicwil on Nov 14, 2020 | 627 comments



As someone who worked on Intel's phone chip: we definitely didn't win it. We fucked it up twelve ways to Sunday. Why: giant egos. There were turf wars between Austin, Santa Clara and Israel over who would design it, and the team that won out had long since lost its best principal engineers and had no clue how to spin the architecture to meet the design win. Otellini's hindsight hedge is pure spin: we knew the landing zone, we just didn't know how to get there. And the aforementioned turf war guaranteed we didn't get access to other teams' talent. I'm bitter because it was a really fun team when I moved from Motorola to Intel Austin, and then it just corroded over political battles.


I have a theory about why turf wars like this happen: the more successful and wealthy an entity, the greater the opportunity for a manager to take wealth instead of making it. When you're scrappy and broke, the only path to success is to make the company successful. When the company is rich and multitudinous, an individual can gain more from politics and turf wars than from actually trying to push an already high enterprise value higher. The "maker:taker" opportunity ratio changes, and so does the type of personality the organization attracts.


As I've linked to many times on HN:

The Iron Law of Bureaucracy

> Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:

> First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

> Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

> The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.

https://www.jerrypournelle.com/reports/jerryp/iron.html

This (and other organizational failures and technological innovations) is why there will always eventually be startups and small businesses where there was once a big entrenched business or two (or sometimes three).

There are also, of course, whole new industries and niches to be created out of nothing; value and wealth are never a finite pile. It's mostly a matter of human effort to find and break out these new areas. Sometimes it's just reinventing things as the old talent grows old and dies off, which provides value in learning history.


I had always heard that there are two 'Iron Laws of Bureaucracy':

1. The only thing that matters is the budget.

2. Bureaucracies only grow, never shrink, and the only thing you can control is the rate of growth.

Thanks for sharing another one with us!


This reminds me of an observation about the British Empire that the bureaucracy was at its most vast the day it was abolished.


Yes, that comes from an essay written in 1955 by Parkinson:

http://doc.cat-v.org/economics/parkinsons-law/the-economist-...

What we have to note is that the 2,000 Admiralty officials of 1914 had become the 3,569 of 1928; and that this growth was unrelated to any possible increase in their work. The Navy during that period had diminished, in point of fact, by a third in men and two-thirds in ships. Nor, from 1922 onwards, was its strength even expected to increase, for its total of ships (unlike its total of officials) was limited by the Washington Naval Agreement of that year.


HBO's The Wire captures this so well across a variety of organizations.

Edit: thanks OP for continuing to post this link, I too found it enlightening :)


That sounds like an ironclad law. But is there empirical evidence for it? And a rational explanation behind it? I mean it's very strict as a rule: "in EVERY case the second group ... ".

Strong claims like that require strong evidence and a theory of WHY that would (always) be the outcome.

I'm thinking of society at large. That is an organization too. Are we doomed by bureaucracy?


When management becomes separated from both the people at the coalface (the people actually doing the jobs the organisation needs to do) and the founders/president/CEO, then behaviour changes. In most organisations, this is at 3 layers of management (not including the actual workers and the top management layer, so 5 layers deep for the whole organisation). After this point, presentation and political acumen outweigh all other factors for the middle management layer, because all other factors can always be "spun" to look good, or blamed on a scapegoat, etc. - there's no direct link between what the middle-manager did and what the result was, so presentation matters more.

Once that happens, the organisation will promote people who do politics well. It's only a matter of time before the entire organisation is focused on internal politics (apart from the people actually doing the work, who become pawns in political moves). As the organisation grows and the layers expand and more layers of management become disconnected, it gets worse.

In a company, eventually the company will get disrupted and die off. In a government that's not a thing, and it'll just keep expanding and playing politics. This gets worse for government departments headed by a politician, because the politician is very focused on getting something they can boast about to their electorate in the few years they have in the department. And they're usually very familiar with the kinds of political games being played.

Source: We studied this in my MBA, it's a known thing. If I still had my textbooks I could dig out a reference.


Would be interested in hearin' more ...


Think of it in terms of thermodynamics. The first group requires extra energy relative to the second group to perform roles of leadership, since those are secondary goals for them. The second group is instead naturally drawn to roles directing the group. Thus, the lowest energy state will be one where the second group is in charge. Over time, things will gravitate to that state as it is the global minimum.


> But is there empirical evidence for it? And a rational explanation behind it?

I don't have empirical evidence for it, but I can take a stab at a rational explanation.

The second group faces no opposition in their pursuit for control of the organization and its processes.

At best, members of that group will fight among themselves, but certainly won't find resistance from the first group.

Control over various parts of the organization is therefore slowly given to one or more members of the second group.


Shouldn't the senior leadership team be providing resistance against the bureaucrats? This doesn't always happen, but I think it does sometimes.


Eventually the bureaucrats become the senior leadership and then they protect themselves.

This grows like a cancer: it starts at the bottom of the pool with people who don't have much skill but are looking for promotions, while people with skills are looking for work results. In some very large companies I worked in (or with), techies focus on work while less competent people become managers; techies initially regard managers as admins who do the work techies don't want to, but then the managers take over the organization and make it rot. This is initially far from senior leadership's sight; when it becomes visible, most of the organization is already affected and it is too late. In most cases HR helps this along, because HR is completely disconnected from techies and the power dynamics ("those geeks nobody understands") and sides with the bureaucrats, because HR is also a form of bureaucracy (you don't go into HR if you are a brilliant STEM graduate).


Workers may, indeed, offer resistance against the second group.


They may offer resistance to the control of the second group but they don't fight as hard to take that control (because they focus on doing the work) so are eventually overpowered


By the time the administration takes hold (due to legal, management or investor requirements), workers are less than likely to care.

I work in one of those organizations (US legal requirements make it a nightmare)... I even have stock incentives, and I couldn't care less.


There will be a lot of datapoints supporting that “law”... the problem is that it’s subjective.

Everyone hates “education bureaucrats”, but if the mission is educating children, at some level the state of the institution is critical to that mission.


I don’t really believe you could do such a thing empirically. I mean it just depends on scale as you indicated. I have never been a fan of ‘social’ sciences either.

Just like Christensen’s Dilemma books are mostly just anecdotes combined with patterns in history. As many such business books are.

But in my short time on this planet I’ve seen it in countless forms where I strongly believe it deserves such a title as “iron law”.


I don't have empirical evidence, but it mirrors the 2nd law of thermodynamics (entropy always increasing). Most "concentrated" things tend to dissipate away.

> Are we doomed by bureaucracy?

When empires collapse, you get small offshoots starting up somewhere nearby. If you search "tree life cycle" on google images you will see the analogy I'm trying to get at :)


Yes. This is why Moon, Mars, etc. colonies are important - they are like small startups relative to the BigCo that is our Earth civilization reaching a state of dysfunction.


You may have posted this many times, but this is my first time seeing and it explains so much about what is going on right now in my life. It doesn’t give me hope, but it gives some clarity around what otherwise seems like dysfunction. Thank you!


But isn't Apple, which is overtaking Intel, an equally big organization? How is it exempt from the rules of bureaucracy?


Apple is actually bigger by employee count; however, about half (I think) are retail employees in the stores.

So yes - a very large organisation. However, Apple's chip design teams are much, much smaller than that. And because chip design is only a small part of what Apple does, those teams are likely able to work like a startup, without much internal politics or in-fighting. Whereas all of Intel's 100,000+ employees are basically devoted to chip design and manufacture.

Apple as a whole, though, is starting to look rather sick from the same sort of problems that Intel has. Under Jobs there was an energy and clarity of purpose that's been lacking for some time. The drift in Apple's core businesses is obvious. The iPhone has been stuck in turgid incrementalism for a long time now, and has already been largely "disrupted" by Android, which sold into cheaper markets and used that to fund R&D budgets that match Apple's. We don't think of it as disruption because Android arrived so soon after the iPhone and thus there was no obvious delayed "disruption event"; Apple just had their market share capped by the refusal to compete on price, which is why Apple fans have for years been forced to argue that Apple is successful because it takes a larger profit margin than others.

Their core Mac business has also been adrift for many years now. I and many others actively avoid upgrading because things frequently get worse rather than better. The first decade of the century saw constant innovation in the Mac business, but after all their best people were reallocated to the iPhone and iPad, quality entered a long period of decline. macOS releases struggled with regressions and rarely introduced new innovations worth caring about. Their apps and hardware have also stalled with own goals and unforced errors like the keyboard fiasco <looks at butterfly keys with holes in the plastic from ordinary levels of use>.

Developer experience and docs are another area of problems, there was a good rant about it posted here the other day.

All these are due to organisational rot, and most obviously, a form of in-fighting between iOS / macOS in which the iWorld got the best people and resources, leaving macWorld to pathetically try and steal things from the other group occasionally, regardless of whether it made sense for a laptop context or not.


It's not. It has in the past shown these exact traits. And will most probably also do so in the future.


I get the idea that Apple's culture of secrecy applies just as much to internal projects. So while the total headcount is huge, the headcount involved in any given project is small, and unlikely to be affected by the rest of the organisation because they don't know about the project.


Thanks for making me aware of this. I'm interested in finding the derivation of this law. Intuitively it sounds sort of like Gresham's law, where bad money drives out good, but it can't be quite that simple, can it? Is it more of an empirical thing? I know I am being lazy just asking, but a quick search turns up only statements of the law. I can make up my own thinking behind it, but I'd like to get it from Pournelle if possible. (Aside: Rothbard was great at this sort of logic chaining, I think.) Also, nice rabbit hole generally here!


Turf wars are a symptom of thinking you are smarter than everyone else ... and what group makes the greatest effort in this counter-productive egotistical exercise? Silicon design engineers.


Wouldn't most employees fall under the first category, and managers and executives the second? And employees are paid to carry out goals, while employers take care of the organization and set the goals. With this modeling of the situation, the second group always had control to begin with. They're management.

And there's always corruption and politics where there are humans, be it in the mailroom or the boardroom.


> Wouldn't most employees fall under the first category,

Tell that to any college/university in the western world!

There's a reason tuitions have risen so high, and the most commonly cited one is the ratio of admins to teachers, now at least 10:1 when it used to be much closer.

I personally blame cheap government-subsidized credit for this quickly forming trend, which took only ~2 decades to have a devastating effect on students who come out of school without a degree that can somehow pay back such debt (those of us in STEM are fortunate; I personally dropped out early).


This law is the reason term-limits and liquid-democracy are such strong tools.

Term limits are best implemented where someone must take on a different job every x years. This doesn't solve the problem of entrenched vampiric bureaucrats entirely, but it at least pushes them towards the chopping block.

Liquid democracy is a type of democracy where people give their voting power for particular issues to whoever they trust most for that topic. Then that person may pass the cumulative votes onto someone they trust and believe in. The point of liquid democracy is to empower people who are actual experts in a subject. Rather than having laymen vote in ways that just empower the most socially adept.

The entire skillset involved with gaining power is so time-consuming, it's incredibly rare for someone to have it and an actual technical skill at the same time.
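To make the delegation mechanism concrete, here is a minimal sketch in Python (my own toy illustration with made-up names, not any particular liquid-democracy implementation) of how transitive delegations could be resolved and tallied for a single issue:

    def tally(direct_votes, delegations):
        # direct_votes: voter -> option they chose themselves
        # delegations:  voter -> person they trust on this topic
        def resolve(voter, seen):
            if voter in direct_votes:
                return direct_votes[voter]
            delegate = delegations.get(voter)
            if delegate is None or delegate in seen:   # nobody to follow, or a cycle
                return None                            # this vote simply isn't counted
            return resolve(delegate, seen | {voter})

        counts = {}
        for voter in set(direct_votes) | set(delegations):
            choice = resolve(voter, {voter})
            if choice is not None:
                counts[choice] = counts.get(choice, 0) + 1
        return counts

    # Carol trusts Alice on this issue, Dave trusts Carol, so Alice ends up carrying 3 votes.
    print(tally({"alice": "yes", "bob": "no"},
                {"carol": "alice", "dave": "carol"}))   # counts: 3 'yes', 1 'no'

Delegations chain until they reach someone who voted directly; a dead end or a delegation cycle just means that vote isn't counted.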


Term limits are a terrible idea. They reduce the power and expertise of elected representatives in favor of the permanent government, civil service and lobbyists. On top of that, they encourage corrupt behavior: since you know you're not getting re-elected, there is no reason not to be corrupt beyond the threat of legal consequences.

> Disentangling Accountability and Competence in Elections: Evidence from U.S. Term Limits

> We exploit variation in U.S. gubernatorial term limits across states and time to empirically estimate two separate effects of elections on government performance. Holding tenure in office constant, differences in performance by reelection-eligible and term-limited incumbents identify an accountability effect: reelection-eligible governors have greater incentives to exert costly effort on behalf of voters. Holding term-limit status constant, differences in performance by incumbents in different terms identify a competence effect: later-term incumbents are more likely to be competent both because they have survived reelection and because they have experience in office. We show that economic growth is higher and taxes, spending, and borrowing costs are lower under reelection-eligible incumbents than under term-limited incumbents (accountability), and under reelected incumbents than under first-term incumbents (competence), all else equal. In addition to improving our understanding of the role of elections in representative democracy, these findings resolve an empirical puzzle about the disappearance of the effect of term limits on gubernatorial performance over time.

https://dash.harvard.edu/bitstream/handle/1/9639960/Alt_Dise...


A: "Power corrupts" is not a joke. Reelection doesn't prevent corruption at all. It's a complete non-factor.

B: Elected officials are supposed to be the overseeing representatives, rather than functional executives (think board vs C-suite). An elected executive is overall a poor idea.


The person you're responding to provided a published study from Harvard as their evidence. What's your evidence he's wrong?


It's not just one study, either. This isn't the area of political science I work in. Amongst those who do, it's an essentially universally-held belief that term limits for legislators are empirically a bad idea.

It's a shame. I like the idea of not concentrating power.


FFS! Incumbency is a complete non-factor in corruption. It's always the ability to increase your power, while in office.

Three countries with fairly open and competitive elections, and very high levels of corruption and high levels of incumbency:

https://www.transparency.org/en/countries/india

https://www.transparency.org/en/countries/turkey

https://www.transparency.org/en/countries/hungary

Then there are these least corrupt countries, that elect new people all the time:

https://www.transparency.org/en/countries/finland

https://www.transparency.org/en/countries/new-zealand

Oh... and let's not forget that the study is about how efficient the governors are... not about corruption at all.


What about the people who project themselves as being experts in a very believable fashion but are actually not experts? Human nature would tend to cause these people to double down on their own self delusion rather than admit they don’t actually know what they are talking about. How do we guard against that?


Whiteboard interviews! Wait, no


I have a similar theory as to why rapidly growing companies become toxic environments (of which politics and turf wars are a symptom). When a company is small and has a low profile, its employees are in it for the company. They believe in the product or service that the company is creating, and buy into its mission.

When the company becomes successful and starts to grow, it starts to attract people who don't really care about the company and its mission—they are in it only for themselves. They only want to improve their lot, and are willing to play politics and backstab as required to get ahead.

If the company is only growing slowly, they can weed these types out during the interview process. But if they are growing quickly, they don't have time to interview properly, and slowly but surely the company is transformed from a cooperative, mission-driven organization into one where employees are fighting among themselves.

Unfortunately these politically minded types are the ones who quickly rise up the ranks, and once most of the top areas of the company are populated by assholes, there is no cure.


> When a company is small and has a low profile, its employees are in it for the company. They believe in the product or service that the company is creating, and buy into its mission.

In the history of the world, 99% of people have been "in it" for the salary. Who gives a rat's ass about the company or the product?


I think just avoiding the people "willing to play politics and backstab" can be a big improvement (for employee happiness and the bottom line / shareholders)


I’ve been thinking similar thoughts about politics. Normal countries can turn into superpowers, and then it looks (from my non-politically astute eyes) as if politicians fight for power over the nation at the expense of the nation itself, and in so doing turn the superpower back into a normal nation.


An additional set of guesses to add on: some individuals are temperamentally better at extraction than at production, prefer it, are more able to exercise deceit, and have less regret or self-shame.


That's obvious but doesn't explain why some organizations are vulnerable to it while others aren't.


Wasn’t meaning to be cheeky. Have been thinking about this for a few years and those are observations that took a while.

There are analogies to building institutions in developing countries. See the constitution of Ukraine (I think) or South Africa. Ginsburg commented publicly, and Breyer agreed (at a talk I attended), that from a "source code" perspective those constitutions are superior to the American one. Sort of v2 efforts where you can adopt best practices. Yet most people would agree those younger nations don't have stronger institutions than the US.

So there has to be both an individual and organizational component.

This next comment is completely unbaked and probably terribly phrased, so please read it gently: in addition to the rest of the comments, I believe there is also (and I understand this is a terrible analogy) a "standby passive observer" mentality, similar to how Hitler came to power. Conceptually, people who get hired externally maybe make up >50% of any large organization. Those employees need to see repeated instances of malicious political acts go by ("hey, aren't we going to do something here?") and learn that sort of complicit passivity.

That's probably a terrible analogy, but notice that there seems to be nothing the three or four people in this thread who worked at Intel could do to alter the company's direction (obviously); more importantly, there's also a tone of dread and finality in their words, and they seem to have all chosen to leave.

So, it seems, like nation building, a complicated problem to understand and solve.


Part of this comment made me think of Albert Hirschman's 'Exit, Voice, and Loyalty' [1] - at least as I have understood the argument, comparing the 'standby passive observer' to the 'loyalty' standpoint, and the commenters as having chosen the 'exit' option. Hirschman's book is still on my reading list though, together with his 'The Passions and the Interests' - my impression of his work is mostly based on the episodes of Alphachat where they discussed his life and work [2,3,4]

[1] https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty

[2] https://www.ft.com/content/d1f5d43e-9cef-41d2-a651-f59a40e47...

[3] https://www.ft.com/content/798e6641-ecf9-480e-906e-8ee756296...

[4] https://www.ft.com/content/33809fbc-c999-41c5-97f6-a5d1f33d0...


Thank you!!! Will Amazon. This is one of those back burner questions that festers and your material seems like it might lead to an unlock! Appreciate it.


As well as the formal, visible institutions like constitutions or legal frameworks there are a lot of invisible ones like cultural norms.

In a lot of countries the most powerful institution is "family first" (sometimes extending that to ethnic or religious in-group second).

The (currently) successful countries got that way with a different norm, "same rules for everyone".

I'm viewing the emphasis on family first in recent Disney/Pixar movies with increasing concern. Sure it sells better in China, but at what long-term cost?


>In a lot of countries the most powerful institution is "family first" (sometimes extending that to ethnic or religious in-group second).

>The (currently) successful countries got that way with a different norm, "same rules for everyone".

Which successful countries are these? As far as I can tell, the richer, more resourceful, established tribes don't follow the same rules as everyone else. They might not be as blatantly corrupt as others, or the corruption may be at a higher level with more plausible deniability, but "family first" is human nature.


You have any concrete examples of both types of countries?


"Same rules for everyone" countries: the archetype would be England after the civil war, where it became established that it didnt' matter if you were related to the King (or were the King), the law applied to you also.

(Of course there are varying values of "everyone." Most countries pretty much excluded half of the adult population until recently.)

For "family first" countries: this is the default mode for humans, so pretty much every country at one point or another. Nigeria and India would be two ountries where it's important to be related to the right people. India's caste system further reduces opportunities for many, perhaps most, people. For more extreme examples: Yemen, Afghanistan.


“Same rules for everyone” resonates very strongly here in New Zealand. There is a book-length study [1] on our national obsession with ‘fairness’ (contrasted with the USA and ‘freedom’).

[1] https://www.goodreads.com/book/show/12112539-fairness-and-fr...


There's the other extreme - state above all.


Constitutions are just pieces of paper. If people don't uphold them - they aren't worth the paper...

You could have a constitution that says - everyone is free and we decide how to govern ourselves voluntarily.... And still get a country like Switzerland.


Most organizations, over a period of time, are destined to decline.

My views on the subject : https://realminority.wordpress.com/

Preventing the decline is possible, but is _continuous_ hard work.


Doesn't have to be a superpower. Look at all the autocrats that set themselves up in various African states. If anything, the success of a power grab depends on weak institutions leading to poor incentives to be "well behaved". This could be at a large established corporation, or a nation-state in disarray after a revolution.


I think both the Roman Empire and imperial China went through several cycles of this.


That's more dependent on a firm's monopoly power than scale or wealth.

If you're in a competitive market you're forced to innovate.

If you're a monopolist you can just sit there and extract wealth. Which Intel was for a while.

By the way, your little theory has support and a name: it's called the resource curse in political economics.


Also wanted to mention the resource curse.

When billions are raining from the sky, whether you do good work or not, the rational plan is to fight over those free resources, rather than work to create new ones.


Aka "Dutch disease"

> coined in 1977 by The Economist to describe the decline of the manufacturing sector in the Netherlands after the discovery of the large Groningen natural gas field in 1959

https://en.wikipedia.org/wiki/Dutch_disease


I think scale might matter too. Companies are partly public goods problems. Each individual's hard work benefits the whole company. As a result they will undersupply hard work. If the company is size N, they only share 1/N of the total benefit they provide. So, the public goods problem bites harder when companies get big - even holding the level of competition in the market constant.


> If the company is size N, they only share 1/N

The graph of the distribution for value per person must be weird, and definitely not flat. Maybe based on a power law, but I am unsure how to model the people that provide negative value, for example bad managers.


Well, the question is how much of the output they get :-)

Though actually note that the classic Mancur Olson result is "the exploitation of the strong by the weak". That is, the person who has the largest share of output contributes disproportionately much to the public good.

Simple proof: write individual i's net payoff from the company's total effort X as

s_i f(X) - x_i

where s_i is i's share of the output f(X), and x_i is i's own effort. Taking everyone else's effort as given, i will equate marginal benefit of own effort to marginal cost, so that total effort solves

s_i f'(X) = 1

equivalently

f'(X) = 1/s_i

Now suppose the person with the largest share satisfies the above. Then everyone with a smaller share must have

f'(X) < 1/s_i

and therefore, if they are contributing more than the minimum possible, they have an incentive to reduce their contribution. So, this extreme version has that only the largest "shareholder" does any work. (A more realistic model has a convex cost of effort, so smaller shareholders do something, but they still contribute less than their output share.)
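If it helps to see that fixed point concretely, here is a minimal numeric sketch (my own addition, assuming f(X) = sqrt(X), a linear effort cost, and output shares of 0.5, 0.3 and 0.2 - none of which come from the argument above): iterating best responses converges to an equilibrium where only the largest shareholder contributes.

    # Toy check of the "only the largest shareholder works" result under the
    # assumptions stated above: f(X) = sqrt(X), linear effort cost.
    shares = [0.5, 0.3, 0.2]          # s_i: each member's share of output f(X)
    efforts = [0.0, 0.0, 0.0]         # x_i: start everyone at zero effort

    def best_response(s_i, x_others):
        # Maximise s_i * sqrt(x_others + x_i) - x_i over x_i >= 0.
        # FOC: s_i / (2 * sqrt(X)) = 1  =>  preferred total X = (s_i / 2) ** 2.
        return max(0.0, (s_i / 2.0) ** 2 - x_others)

    for _ in range(50):               # iterate best responses to a fixed point
        for i, s_i in enumerate(shares):
            efforts[i] = best_response(s_i, sum(efforts) - efforts[i])

    print(efforts)  # ~[0.0625, 0.0, 0.0]: only the largest shareholder works

With the square-root technology, member i's preferred total effort is (s_i / 2)^2, so whoever has the largest share sets the total and everyone else free-rides.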


Your assumption is that the organisation is a cooperative - the employees get a share of the output of the organisation.

Intel is not a cooperative.

Fungible employees (the majority?) are paid a market clearing price (salary) for their effort. The company can derive much more value from an employee and the employee earns very little of the excess (perhaps bonuses to align working incentives).


Supposedly, in knowledge work like programming, the distribution follows a pareto distribution: for n people, sqrt(n) produce half the productive output.


Countries with lots of minerals in Africa/ South America can tell you stories about the resource curse.


The 'resource curse' might have the causality wrong though in many circumstances.

Maybe those countries would find themselves in a similarly bad state (or worse) without those resources. Maybe their bad state is due to historical reasons - being former colonies with very little power on the world stage - and resources just couldn't help much.


The resource curse is really about staying bad, not turning bad. We must realize that all countries 300-400 years ago were 'bad' from our perspective.

Countries with dominant resources that only found them when they were already developed don't suffer from the 'resource curse'.

In such cases, not having resources doesn't mean you turn into Switzerland or Canada; rather, it means that it will be harder for one dominant dictator to hold on to power. In some African countries that are now proto-democracies, dictators simply couldn't hold on to power without an easily available resource base.

Bruce Bueno de Mesquita's work in 'The Logic of Political Survival' and 'The Dictator's Handbook' is quite interesting.


Having a certain superpower send funds, training, and arms to right wing militias and established dictators certainly helps maintain the resource curse.


This is a common trope, but Canada has lots of resources and seems OK.

Also, the USA is the poster child for this; it's arguable that without the California gold rush there would be no USA, as this is what is thought to have capitalised the federal government sufficiently to impose order on the West and fight the South.


This seems true, but I guess I also have more empathy for decision-makers: to me it seems like it is far from obvious who should take on which projects. Pretty much by definition, the organization is structured so that its current projects have sensible homes, but when a totally new project comes up, it is unlikely to fit perfectly anywhere in that structure. If it did, it wouldn't be totally new!

None of the options for dealing with this seem great. I've seen "internal incubator" type things which are designed to come up with new ideas, but honestly I'm not a fan of these. The structure incentivizes prototyping new "left field" ideas, but those ideas are likely not to gain any traction, and if they do, it is likely to be a mess to transition them into real projects within the more normal operations of the organization. And it does nothing for projects that are clearly important from inception, even if new; the "left field" incubator is not the right place to put something that isn't a crazy idea being tested but something known to be important.


The problem I've observed is also that (for performance / reward purposes) managers lie like hell when it comes to their team's performance. The bigger the company, the less actual signal is received by upper managers, because everyone doctors the hell out of these numbers. It might be impossible to even KNOW who the best team in your company is.


Yes. In fact, I would argue it should not even be possible to identify "the best team" - if teams in different parts of the org are directly comparable, then they must be doing the exact same thing, and why do you have multiple teams scattered around doing identical things? That is, a well structured org will have teams working on Thing A and Thing B. If a brand new Thing C comes along, it isn't either Thing A or Thing B, and you have no real information on how good either team will be at Thing C; all you can know is how good they are at Thing A and Thing B respectively.


At one time IBM had the resources to assign the same major challenge to two internal organizations.


Fair point, and that sounds pretty awesome for getting data on things, but it's a luxury most don't have. I suspect nearly every organization where this is the case is doing it accidentally rather than deliberately.


You put it so well. Having worked in both large and small companies, I can attest to this theory. An exception might be SpaceX, where you have a strong dictator at the top and any tribal-war annoyances get a direct authoritarian spanking.


> An exception might be SpaceX where you have a strong dictator at the top

I think a similar thing is happening at Amazon (the strong dictator stuff, I mean), and it's my hunch that Google has started following Intel's trajectory; they just haven't realised it yet.


The more perceptive ones at Google have realized that and quit en masse over the past few years. Since Sundar and the rest of the McKinsey gang took over, it has been death by a thousand cuts for Google.


Sundar's complete destruction of Google's massive goodwill has been spectacular. And sad.


I think he's an improvement over Eric "maybe-don't-do-it-if-you-want-to-keep-it-private" Schmidt


Em... Eric had a disastrous PR moment, while Sundar is just turning Google into another IBM.


Similarly in US politics. And before we get emotional, how is it different from those examples?


All countries where there is a class of people who go into politics as their only career. The UK is riddled with them.


In some ways, this is the result of equality through having paid salaries for politicians. While the previous arrangement of having to be independently wealthy excluded most people from getting into politics, it did mean that it wasn't a career choice. It was something you did after you gained wealth and experience, at the end of a career in an actual industry and most likely running a business or being a senior manager, or you were wealthy to begin with. While the arrangement was not perfect by any means, it did mean that the politicians of that era were men of substance with a wealth of knowledge tempered by real-world experience. Politicians who go into it as a career straight from university lack any serious real-world experience outside politics, and I'm afraid to say they aren't up for the job. Their heads are full of political ideals but they mean little to the rest of us who just want competence and level-headed decision making.

I think there's a relatively straightforward and fair solution to the problem, and that is to require a certain amount of experience before being permitted to stand as a political candidate. You could do this by having a minimum age limit (e.g. 35), or by having been employed or being an employer for a certain number of years e.g. 15. This would ensure that the people representing us have gained a little understanding and experience of the world we live in and the real needs of the people they serve. Right now, I feel politicians of all stripes are almost completely divorced from the rest of us, and the consequences of their actions.

I do feel it is somewhat foolish that any job in the real world, from management through to the lowliest worker requires years of experience, multiple qualifications and certifications, training and assessment. But politicians require no independent assessment of their capabilities. The ballot box is not a high enough bar when all the candidates are of low quality.


> While the arrangement was not perfect by any means, it did mean that the politicians of that era were men of substance with a wealth of knowledge tempered by real-world experience.

Yes exactly. I don't hold with the notion that the head of the NHS has to have been a doctor or the head of the MoD has to have been a soldier - there is value in being able to look at things objectively with an outsider's perspective. But at the same time, I do absolutely believe that the Chancellor of the Exchequer should be someone, from any industry, who has employed people and had to make payroll on payday come hell or high water.

It would also benefit everyone if some life experience was required before becoming a teacher.

> The ballot box is not a high enough bar when all the candidates are of low quality.

Agreed again, voting now is about holding your nose and choosing the least-worst. I want "none of the above" on the ballot paper and if it wins, the real candidates are banned from politics for life. Repeat until some decent candidates show up. But none of our incumbent politicians would ever pass that law for obvious reasons.


Absolutely agreed on all counts.

The point about teachers is also something I think is quite neglected today. A teacher with real-world experience of their subject is vastly better than someone who has known nothing of the world outside education. And that's of value far beyond the subject matter: they can provide career guidance and real-world perspectives of all kinds that less experienced teachers simply can't provide. Certainly some of the best teachers I had were those who had done real work.

Regarding the Chancellor of the Exchequer, I absolutely agree. But I think it should go further. I think all MPs should have direct experience of running a business and having to make payroll. Too many of them lack understanding of the reality of what a business is, and how the economy runs. It would temper some of the most extreme and dangerous actions, from taxation to social welfare spending. Too many think businesses are "rich" and can be taxed with impunity to pay for things of dubious benefit. It's easy to be profligate with other people's money if it seems like it's there for the taking. Experience might make them think about how to grow the economy to benefit us all, and reduce unnecessary expenditure. They might also think more carefully about supporting small businesses while ensuring large multinationals also pay their dues. I personally paid more corporation tax than several multinationals, despite earning less than the living wage.

For the first time, I set up and ran my own small consulting business for the past two years after being made redundant. (I'm also in the UK.) It was a small-time business with only one client for a couple of small contracts, and I since got a new permanent position. But the experience of having to do all the company registration, contract negotiation, invoicing, bookkeeping, dealing with accountants, paying corporation tax, and finally getting it all wound up was an invaluable experience which will likely be of great benefit in better understanding the businesses I work for. It gave me a proper appreciation of the realities of running a business, including all of the responsibilities you have to shoulder. I think every elected politician should have this experience. Every self-employed person in the country has a better understanding of the practical reality of economics and taxation than most of our politicians. And I think all politicians should be aware of the reality of what the rest of the country has to bear in response to the decisions they make.


A company and a sovereign state are completely different things.

Both have politics since they are structures made of humans.

But they are completely different.

A company is a legal entity functioning within the framework of the legislation of a state, to start with.


> A company and a sovereign state are completely different things.

It depends on the level you look at. For example, the political skills a manager uses to advance themselves at the expense of the organisation and their subordinates are identical whether at Raytheon or the DoD (to use an American example).


Only the government can use force openly on you with complete impunity.


Despite their size, I think they're still in the startup phase where they have a large number of starry-eyed (pun intended) folks coming in because of the mission. That really is Musk's secret - you can get a lot done churning through people.


Is SpaceX already that big to be in the class of overweight titans like Intel?


You could imagine the division responsible for mobile chip design being of similar size to a mid-sized company like SpaceX. The optics are the same. You're right that Intel is massive, but the logistics department at Intel doesn't have any friction with the teams tasked with mobile chip design.


Perhaps not in absolute terms; however, in the launch industry they are almost as massive, if not bigger. SpaceX also employs 8,000+ people. That is a scale large enough for it to have problems similar to those of Intel-sized companies.


That's a very astute observation.

Having worked at all three of:

- A trillion-dollar corporation

- A smaller company that grew quite a bit and sold for $5 billion+

- A startup now

People optimize for what will make them rich/wealthy.

In larger companies, the ENTIRE function of middle management is to extract wealth and not create it.

In some ways it's a miracle Microsoft managed to get Satya Nadella as CEO. Normally the people who rise up the ranks are the politically astute


> Normally the people who rise up the ranks are the politically astute

Don't discount Nadella's political astuteness. I think it would be more accurate to say he is not merely politically astute, but instead able to combine his skill at internal politics with customer empathy and technical intelligence.


The story in Satya's own book about the trip with Ballmer to try to recruit some HP exec, where Satya pushed to take over some division and Ballmer reluctantly agreed on the flight down to SF but told Satya he'd better be able to execute, and quickly, or he should work on his parachute skills, is hilarious on its own when viewed through the lens you describe.


I just read that part of the book[0]. Actually, Satya says that Ballmer offered him the position because MS had to answer to AWS.

>[...] Steve had invested in it because [search][1] would require the company to compete in a sector beyond Windows and Office and build great technology—which he saw as the future of our industry. There was tremendous pressure for Microsoft to answer Amazon’s growing cloud business. This was the business he was inviting me to join.

>“You should think about it, though,” Steve added. “This might be your last job at Microsoft, because if you fail there is no parachute. You may just crash with it.” I wondered at the time whether he meant it as a grim bit of humor or as a perfectly straightforward warning. I’m still not quite sure which it was.

But yeah, funny, though.

[0] The book is Hit Refresh, if anyone is curious.

[1] It was Bing: >There was no mention of the cloud in that year’s shareholder letter, but, to his credit, Steve had a game plan and a wider view of the playing field. Always a bold, courageous, and famously enthusiastic leader, Steve called me one day to say he had an idea. He wanted me to become head of engineering for the online search and advertising business that would later be relaunched as Bing, one of Microsoft’s first businesses born in the cloud.


Thank you for the quote! I would be a bit skeptical of any highly ambitious executive’s self-described potentially revised historical narrative of their ascent, however his words are his words.

I don’t know about you but if my boss made a parachute joke while currently in a private plane flying south, it would be a scary thing.


I highly doubt you can get to the executive level of such a large company without playing politics. And playing it well. It's inevitable.


There was a comment in one of the Google biographies (maybe Bill Campbell's? RIP) that the execs fought so much that Sundar became the person to unwedge the egos, and therefore got selected on that basis.


It's hard to think of a 'successful' tech org without a massive ego at the top. So far Sundar hasn't shown he has what it takes. Is a massive ego required to drive these orgs?


What is it that makes you think Satya Nadella is not politically astute?


"When you’re scrappy and broke, the only path to success is to make the company successful. When the company is rich and multitudinous, an individual can gain more from politics and turf wars rather than actually trying to push an already high enterprise value higher."

Extremely well said, and having worked at two (initially) successful behemoths so far, I couldn't agree more.


If there is some Pareto-like distribution of who is creating new value, all of those others must be working at something.


I agree with you and a lot of what the people responding are saying, but the one thing that keeps popping into my head is that Apple, who is doing the disrupting in this case, isn't exactly the lightweight scrappy contender put forth in the parables and anecdotes. If the hypothesis is true for Intel, why is giant Apple not suffering from the same mercenary middle management / toxic political BS?


Apple historically has concentrated leadership to a T (see: the Steve Jobs mythos). I don't recall hearing anything about Intel being nearly as centralized.

Steve Jobs apparently demolished internal units, with the following outcome:

> As was the case with Jobs before him, CEO Tim Cook occupies the only position on the organizational chart where the design, engineering, operations, marketing, and retail of any of Apple’s main products meet. In effect, besides the CEO, the company operates with no conventional general managers: people who control an entire process from product development through sales and are judged according to a P&L statement.

https://hbr.org/2020/11/how-apple-is-organized-for-innovatio...


Apple is organised very differently from most big corporations; it has what's known as a functional organisational structure: marketing, operations, software engineering, hardware engineering, etc.

Most corporations are organised around product groups, but what happens when the mobile products group comes up with a new device and wants to sell it to corporate clients? The enterprise products division might not like that. Imagine if Apple had a Mac division selling laptops, they might not have been happy about the Phone division coming out with the iPad and keyboard cases.


To be fair, I think many of their Mac-related problems over the past few years are a result of no one being able to defend the line internally, too. Management is a difficult problem.


I think this is a misunderstanding. The problems with the Mac have not been due to neglect, but to very aggressive, forward-looking and innovative advancements of the platform, some of which just failed.

You don’t develop all new low profile keyboard technology, an original touch sensitive display interface system, a new hinge mechanism, a custom integrated security and image processing chip (T2), and a unique 5k display system through neglect. The problem is some of these advancements were simply misconceived. It wasn’t lack of interest or investment, it was poor execution of some of the new developments.

Steve Jobs used to say that being better necessarily meant being different, but being different means taking risks and sometimes they don’t pay off.


Apple bought into the whole "Post-PC" mirage when the iPhone & iPad were on a tear: the desktop Mac was definitely neglected for years with no advancements while manpower was directed at iOS & iOS devices. What they did come up with was the form-over-function trashcan Mac "Pro" with severe thermal limitations. They only got it right on their next attempt (some 2+ years later), after being publicly called out for neglecting their pro customers (Final Cut Pro X and the trashcan Mac were the final straw for a lot of video professionals, who switched to Windows).

For a while, the software quality of new OS releases was night and day comparing iOS and OS X. It was clear where the focus was. Apple's org structure has its weaknesses.


The largest issue with the trashcan MacPro6,1 was complete lack of GPU performance. They weren’t even VR capable.


The biggest problem is the form-over-function design. The same applies to the new one as well, though it's not as bad. In both cases, they are missing out on the business market who need a workhorse. Both are priced out of most people's budgets. We end up buying Mac Minis and struggling with their limited capabilities and expansion options. Their whole strategy here is poor. They don't have any midrange options between the two extremes.


Perhaps we just have a different opinion, and that's OK. I don't work at Apple, so I have no deeper insight, other than that to me it looks like the Mac branch has been flailing with no clear direction for some time now. They may have the money to try things, but there's no one steering the ship, so to speak.


I'm sure they have it, but it occurs to me that they did go through a near-death experience that might have caused some of the worst baggage to abandon ship, and for a while afterwards they probably knew that it was possible for even a company as big as Apple to go under.

Now that it's been a while, it wouldn't totally shock me if they ended up drifting Intel's way eventually.


Explained somewhat by the nature of the innovator's dilemma and the products they're making:

- x86 chips for PCs were Intel's cash cow, so doing anything to disrupt that revenue stream would be fought against

- Apple's cash cow is the phone itself, so they don't particularly have any dogmatic loyalty to one type of chip as long as the phone sells

Now if a technology were to come up against the iPhone (glasses, anyone?) they'd probably face the same conundrum: disrupt themselves or be disrupted?


They did, famously, deliberately kill their own iPod market by introducing the iPhone.


That was completely conventional and expected per the innovator's dilemma - they moved to a higher-margin product with more features. The iPhone was innovative and impressive for many reasons, but it did not short-circuit the innovator's dilemma.


Very true good point


Apple probably has similar problems, but this particular sub-organization is healthy for now.

Maybe because it's newer and hasn't rotted yet.


While Steve Jobs was alive, he served as Apple's lodestone and BDFL. It's only been 9 years since his passing, so any rot that might exist probably wouldn't have grown enough (taking into account their culture of secrecy as well) to be externally visible.


> Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:

> First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

> Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

> The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.


That's an arbitrary judgment. Workers can be lazy and bad at their jobs, and leaders can enable workers to perform better.


> the greater opportunity there is for a manager to take wealth

Pieter Hintjens (creator of ZeroMQ) wrote a lot about this, and about how organizations from big companies to small charities to software projects ought to design their internal structures of roles, titles, voting and auditing to defend themselves against this kind of attack. Particularly if there's any kind of treasury, an opportunist will be tempted to take that role; any kind of credit will attract people who want to take the credit without doing the work; power structures will attract people who want the power; and programmers who "just want to code" and avoid politics will find themselves at the bottom of a dysfunctional, infected organisation structure with no power, no credit, and being instructed by distorted requests.

IIRC he tried to design the ZeroMQ organisation so that there wasn't any way for one person to insert themselves in a key point.

I'm not sure where on his blog I was reading that, but it's related to his writings about psychopaths[1] as the opportunists, and it might be part of his book Social Architecture[2].

[1] http://hintjens.com/blog:_psychopaths

[2] http://hintjens.com/books


Do power structures in open source projects really exist when one person can fork the entire thing with the click of a button?

I could be mistaken, but I think this kind of happened with ZMQ where one contributor got annoyed and went off to create nanomsg.


Of course; when Guido Van Rossum left Python[1], he titled his email "Transfer of power" and commented "I am not going to appoint a successor. So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?". No doubt a lot of people would like to insert themselves as new "Benevolent Dictator for Life" - imagine the uproar if Guido had appointed a successor who nobody had heard of, but had recently bought Guido a luxury yacht. What would "fork the entire thing" look like - would Apple do it? Google? FreeBSD? Microsoft? RedHat? Would the Python dev core fracture into competing groups of stay vs leave? Just because you /can/ fork it, doesn't mean nobody has any power or influence.

Or when people complain about Lennart Poettering and systemd - either how it came to be in certain distributions, or how it took over certain subsystems, or people's behaviour around it - isn't that politics and power structures?

Or that recent post on HN ( https://news.ycombinator.com/item?id=25076197 ) where the Cairo project appears to have nobody left who is able to make a release: the last few years of releases were done by one person working at Samsung, and there hasn't been one since he left. Presumably there was no organizational structure for how to distribute this power to several people, or how to choose and "promote" someone for the job, or, if nobody wants to do it, for how to handle years' worth of effort ending up on the rocks where nobody even cares anymore. Isn't that a broken or absent organization one way or another?

[1] https://mail.python.org/pipermail/python-committers/2018-Jul...


That sounds very interesting. Has there been any update on how successful or not his ideas are in practice?


I'm not sure. In his article on "Building Online Communities" (http://hintjens.com/blog:117) he says he spent three years experimenting with building online communities and gathering results, and that ZeroMQ is his biggest success (at being organised to defend against both vendor capture and takeover by a single powerful/competent/rude contributor). For background on why that was his focus, he wrote this (http://hintjens.com/blog:125#toc15) about his experiences working on OpenAMQ:

"""In late 2009, the Chair and Red Hat sat down and decided, in a secret meeting, to rewrite the spec. [...] From scratch. By himself. After years and years of committee work. After years of investment by others in working code. Without asking anyone except Red Hat. And then, to force this spec through the working group using his usual tactics: bullying and lobbying. [...]

One of my spin-off projects was the Digital Standards Organization, and I came to understand what was needed to protect a standard from predatory hijack. I summarized the definition of a "Free and Open Standard" as "a published specification that is immune to vendor capture at all stages in its life-cycle." What the "free" part means is, if someone hijacks your working group and starts to push the standard in hostile directions (as Red Hat did), can you fork the standard and continue? Does the license allow forking, yes or no? And secondly, does the license prohibit "dark forks," namely private versions of the standard?

If either answer is "no," then you are at the mercy of others. And when there is money on the table, or even the promise of money, the predators will move in. The AMQP experience gave me a lot of material for my later book on psychopaths. [...]

If we'd managed to build a thriving community around OpenAMQ, it would have survived. So the lesson here is simple: community before code. Today this is obvious to me. Eight years ago, it wasn't."""

And he references this Social Architecture FAQ (http://hintjens.com/blog:120), which talks about rude but highly skilled people contributing to projects (read that with "community before code" and the quote "if you want to go fast go alone, if you want to go far go together" in mind).


They're unsuccessful.

His book on Psychopathy was panned on Reddit, as well as on Goodreads:

https://www.reddit.com/r/IAmA/comments/4c27ss/im_pieter_hint...

Likewise, the wikipedia article for Social Architecture has been listed "not notable" for the past four years, and hasn't seen any meaningful edits since 2016.

https://en.wikipedia.org/wiki/Social_architecture

With respect to the author, I'm not surprised. Nerds like ourselves enjoy "discovering" ideas, slapping pretentious labels on them, and then evangelizing them to the whole world... without checking if other people know more about those ideas than we do.


> Likewise, the wikipedia article for Social Architecture has been listed "not notable" for the past four years, and hasn't seen any meaningful edits since 2016.

Pieter Hintjens died in 2016.


I'm not seeing his book being panned on your Reddit link, I'm seeing ad-hom attacks on him because of his lack of credentials or some unrelated behaviour they don't like.

Is there any actual panning of his book by someone who read his book, in your link? Or anyone who engaged in good faith - e.g. it's a book about his experiences of problem behaviour patterns that hurt others?


The author himself said his book was panned on Reddit: https://news.ycombinator.com/item?id=11631424

The larger point is that the audience he chose to market to, rejected his ideas. Whether or not his audience was acting in "good faith" is irrelevant to his failure to communicate.


It’s time horizons too. The time horizon of creating the next great thing can be 3-5 years. An office politician can spend a lot of time grabbing value rather than creating it in that time span. This is one reason to be suspicious of senior folks who job hop too much.


I suspect some of it is also driven by 'impact-based rewards', i.e. valuable projects get the team paid more.

It means there's immense incentive for teams to monopolize skill and reduce other teams' value so they get picked. It also encourages posturing as strongly as possible, rather than accurately, since it's all based on perception - it's not like you can prove you'll do a better job than another team. To make it worse, if a highly valuable project seems like it might fail, the company will likely throw more resources at it - the risk to the team for getting in over their head is relatively low (and still looks great on a résumé), compared to the risk to the company.


I was thinking precisely this as I read parent's comment. These are managers/teams fighting in zero-sum game (or something they THINK is a zero-sum game) for a bigger share of stock bonuses of the existing extremely valuable company. Take that win and that bigger chunk of a massively valuable company, and in a way you don't really care what happens with the phone chip market; by the time it really pays off big, in 6 or 8 years, you might be somewhere else anyway.

Whereas, in a smaller company with a smaller pie to divide amongst folks, the growth of the pie is the most important thing.


Seen and been subjected to this many many times.

Startup going well, productive team and people to work with.

Some guy who sucks with years of sucking experience joins to “manage” the company/team/department into the ground.

I work for a not-for-profit now; we get paid less, but very few people suck.


In sum, humans do not always optimize towards one variable like success/fitness.

Instead they optimize for many things: money, prestige, power, security, and so on. Much as you can't describe health as a single thing, when you collapse your insight to a single variable you miss everything in your simulation of how reality works.

This applies at an individual level, but especially at a cooperative level with multiple individuals. When we cooperate (or just act in ambiguous situations), trying to collapse your actions to "mental shortcuts", a.k.a. heuristics in judgment and decision-making (to borrow some Thinking, Fast and Slow language), further reduces insight.


Some institutions do persist for a very long time. The Japanese monarchy, the Catholic Church, and the old universities at Oxford, Bologna, etc. are particularly prominent examples.


I agree, and I'll add more to it. In a very small company, there simply isn't enough space for any individual team or segment to diverge significantly from the overall success of the company. When your company is very large, often the needs of various sub-groups might not be in alignment with overall company success, even if they're still trying to make money for the company.

A great example comes from this article. The group in charge of the Pentium product line would not be thrilled to see the Celerons launched, since that would cannibalize their sales. Obviously for long term success of Intel this was the right move, but from the perspective of the Pentium team this might be a horrible idea.


>the greater opportunity there is for a manager to take wealth instead of make it

Much like countries and empires. Once growth largely stops being the easiest way to capture wealth "entrepreneurs" start eating away from the inside.


You can include wealthy nations in that list of entities. It's not just the leaders but also the individual workers.

This is not a slam against people from wealthy nations. I am one of them. I just think it's a danger to be aware of.


Reminds me of this article I just read. In a totally different domain, but if there is so much wealth, it is easier to just take a part of it instead of creating new wealth.

https://pedestrianobservations.com/2020/11/13/surplus-extrac...


What you're saying might/might not be true in general, but it is not relevant to Intel. Intel has for decades had internal teams compete. It's a purposeful choice that they stuck with despite the numerous times it has failed them, the blood on the floor, and the departure of engineers they wanted to keep.


Absolutely my experience.

However the small companies almost always have VC investors, which brings negative effects which are approximately as awful (just in totally different ways).

If you can find a small self-funded company that isn't gonna get squashed next time Masayoshi Son sneezes, it'll be the best job you've ever had.


the greater opportunity there is for a manager to take wealth instead of make it

And what I view as completely irrational is the fact that in so many cases, the self-interested option is actually to work together and do something meaningful that advances you far more than chasing a quarterly metric.


Essentially what Steve Jobs said about product vs sales & marketing people and Xerox, but more regional/department version.

https://youtu.be/P4VBqTViEx4


This is why extremely steep marginal takes rates improve economic efficiency. If you can’t get rich by playing office politics you might as well enjoy making a good product.


> If you can’t get rich by playing office politics you might as well enjoy making a good product.

It seems to me this would go the other way. Making a good product (in the sense that it's successful in the market) is what you want a company to do, but that's the thing mostly aligned with the profit incentive.

If profits are taxed heavily then spending resources on making a better product so that more people buy it doesn't make you a lot more money, but spending resources on office politics brings non-monetary personal benefits like status and control, so managers would have a greater relative incentive to do those things. See e.g. Soviet Union.

One of the biggest problems today is actually that the tax code provides incentives for corporations to retain profits within the organization rather than paying them out to shareholders to invest in something else, which results in inefficient corporate bloating.


Corporate tax rates can be lower, ideally at 0, but I doubt our current system incentivizes corporations retaining profits. On the contrary, recently many needed to be bailed out because they paid out too much to shareholders instead of saving enough for a rainy day.

Making a better product also doesn't necessarily lead to profits. That's an oversimplified perspective. There's a far greater correlation between monopoly power and profit. In some cases this monopoly status comes from truly groundbreaking work. Not in all cases though.


> Corporate tax rates can be lower, ideally at 0, but I doubt our current system incentivizes corporations retaining profits.

Multinational corporations commonly avoid taxes by keeping profits offshore in a low tax jurisdiction. In order to pay the money to shareholders they have to repatriate them and pay corporate income tax on the money, and then the shareholder has to pay tax again on the dividend or the capital gain from the buyback.

Or they can leave the money where it is, not pay any of those taxes and invest it in some index fund from there. But then the money is stuck inside the corporation, and gets invested in some index fund instead of potentially going to some higher risk/reward investments that some of the original shareholders would have chosen.
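
To make the incentive concrete, here's a toy back-of-the-envelope sketch in Python. The 21% and 20% rates are purely illustrative assumptions, not a claim about any actual tax code; the point is just the double layer of tax described above:

    # Hypothetical rates, illustration only: repatriate-and-pay-out vs. leave offshore.
    profit         = 100.0
    corporate_rate = 0.21   # assumed corporate/repatriation rate
    dividend_rate  = 0.20   # assumed shareholder rate on the dividend / buyback gain

    paid_out = profit * (1 - corporate_rate) * (1 - dividend_rate)
    retained = profit   # stays inside the corporation, no tax event triggered yet

    print(round(paid_out, 2), retained)   # 63.2 vs 100.0 -> incentive to keep it inside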

> On the contrary, recently many needed to be bailed out because they paid out too much to shareholders instead of saving enough for a rainy day.

COVID-19 isn't a rainy day, it's a once in a century pandemic. Most companies have never made enough money to be able to survive it, and the ones that did still shouldn't be hoarding it just in case. Systemic problems get solved through systemic actions, e.g. lowering interest rates or sending stimulus checks. To expect companies to have saved enough money to survive rare systemic events without that is to penalize and destroy any company lean enough to have been unable to save it to begin with.

> Making a better product also doesn't necessarily lead to profits.

Making a worse product in a competitive market has a strong relationship with going out of business.

> There's a far greater correlation between monopoly power and profit.

But that's an independent problem. The solution there isn't to raise taxes, it's to break up monopolies.


> Multinational corporations commonly avoid taxes by keeping profits offshore in a low tax jurisdiction.

That does happen, but many including the biggest tech companies found a clever way around it. Via a combination of ultra-low interest bearing bond sales and share repurchases. It does the same thing but avoids those taxes. So there's no incentive for profit retention on net.

> COVID-19 isn't a rainy day, it's a once in a century pandemic.

Part of going into business is accepting risk, including systemic risk. That's why the corporate veil exists and it's generally positive for the overall economy when weak businesses fail. A strong business would have an operating model that can withstand such risk.

> To expect companies to have saved enough money to survive rare systemic events without that is to penalize and destroy any company lean enough to have been unable to save it to begin with.

Maybe those companies shouldn't have been in business then and that capital would have been better deployed somewhere else. I'm surprised that opportunity costs of capital allocation are ignored when defending bailouts. They introduce market inefficiency and reward complacency.

> Making a worse product in a competitive market has a strong relationship with going out of business.

In principle, yes. In reality, most markets are not competitive to that degree. Perfectly competitive markets don't exist in a modern regulated economy. We can talk about removing regulatory burden to make this true, but that's not where we are now.


> That does happen, but many including the biggest tech companies found a clever way around it. Via a combination of ultra-low interest bearing bond sales and share repurchases. It does the same thing but avoids those taxes. So there's no incentive for profit retention on net.

The loans allow the money to be transferred back into the domestic entity, but there are then restrictions on issuing more in dividends or buybacks than you have in profits, and borrowing isn't considered profit. So they can get the money back into the US and use it for wasteful empire building but they can't give it to the shareholders -- it makes the problem worse.

And buybacks have more advantageous tax treatment than dividends but they're still taxed as capital gains on the difference between the price you bought the shares and the price you tender them for, which doesn't happen if the money stays inside the corporation.

> Part of going into business is accepting risk, including systemic risk. That's why the corporate veil exists and it's generally positive for the overall economy when weak businesses fail. A strong business would have an operating model that can withstand such risk.

You're using two incompatible definitions of weak. In the first case you want weak businesses to fail because they're inefficient, i.e. they're consuming more resources than they produce and go bankrupt. In the second case the business fails because there is a situation resulting in a temporary loss of revenue (shutdowns) but the business still has operating expenses (e.g. rent) and is too efficient to have enough slack to bridge the gap. The inefficiency wasn't caused by the business occupying space it couldn't use, because at the time nobody could use that space -- the loss was a sunk cost to society at large.

If you let them fail then when the temporary situation ends there is greater market concentration because the leanest companies failed, which allows the remaining ones to charge higher margins due to the reduced competition to the detriment of their customers or suppliers.

> I'm surprised that opportunity costs of capital allocation are ignored when defending bailouts. They introduce market inefficiency and reward complacency.

This is why "bailouts" should be distributed uniformly without regard to risk of failure, e.g. through lower interest rates or fixed sum payments to everyone. Then you're not rewarding the most precarious companies because they get no more than anyone else, but you still prevent them from failing. Meanwhile those in a stronger position can use the money they received to increase investment or consumption, which is what you want in a down economy anyway.

> In principle, yes. In reality, most markets are not competitive to that degree.

It still works like that in practice much of the time. Blockbuster's product wasn't as good as Netflix and now they're gone. GM's product wasn't as good as competing ones and it led them into bankruptcy. Blackberry, Sears, RadioShack, Circuit City, AOL.

It could happen more than it does, but it still does.


> but there are then restrictions on issuing more in dividends or buybacks than you have in profits, and borrowing isn't considered profit. So they can get the money back into the US and use it for wasteful empire building but they can't give it to the shareholders

The argument has been subtly shifted here, from businesses not being able to return capital in the form of dividends due to taxes to business not being able to do so due to loan covenants. Irrespective of the latter point, it is clear that given low interest rates, repatriation tax policy has no effect on incentives for capital repatriation. The biggest most profitable companies that pay out large dividends and do share buybacks make frequent use of this.

As for the loan covenant restrictions, from an economic stability perspective, it's very poor incentive design to allow companies to pay dividends when they're not profitable. This is in effect what you mean when you say "restrictions on issuing more in dividends or buybacks than you have in profits". A business thus impacted should focus on getting profitable. Liquidation is also an option. Dividends and buybacks should not be.

> This is why "bailouts" should be distributed uniformly without regard to risk of failure

This policy sends a message that foresight doesn't matter and reinforces a short-term oriented time horizon. Not only is that worse for society, it's worse for business in the long-run due to investment disincentives. Providing temporary support to individuals or local small businesses to preserve communities, especially vulnerable ones, is a different story and this is not an argument against it. Bailing out large well-managed businesses on the other hand makes no economic sense, these businesses can easily be restructured, sold, and given a new lease on life under new ownership. Antitrust concerns should be handled by courts, not by bailouts.

> increase investment or consumption, which is what you want in a down economy anyway.

Not all investment is equal, some investments have a much better return than others. Good policy incentivizes the latter as opposed to incentivizing indiscriminately. Investment for the sake of investment leads to waste and is bad for the economy in the long-run.

> and is too efficient to have enough slack to bridge the gap. The inefficiency wasn't caused by the business occupying space it couldn't use, because at the time nobody could use that space -- the loss was a sunk cost to society at large.

This is very oversimplified. Larger businesses can tap into financing, working capital, and have many other tools available to deal with this. And if they can't, that's not efficiency, that's poor operational discipline. Since the owners stand to gain when such a business does well, they should be ready to lose when things go poorly. It shouldn't matter if the cause is one akin to a force majeure.

> Blockbuster's product wasn't as good as Netflix and now they're gone.

Netflix had a much lower cost of capital in the beginning and also a much lower fixed expense. In their case not having storefronts improved the product but that's not a fixed relationship. One can come up with many cases where the opposite holds. Therefore it only applies to that particular case and doesn't support the general argument. Similar nuances exist for the other anecdotal examples listed.

F.A. Hayek provides great arguments against government intervention in the economy. There can be a role in providing assistance to individuals or communities, Hayek was famously supportive of both, but it's important to not let that bleed over into market dynamics which must remain competition-based.


What is "extremely steep marginal takes rates" ?

Are you saying that low profit enterprises create more value? The thesis is attractive but also makes me wonder about Apple, high profit and also highly creative.


I think they meant 'tax rates'


holy cow this is so, so apt. i am stealing this explanation for the rest of forever. this is exactly why this happens... i wonder how to structure organizations / incentives to alleviate this.


Not just attracts - rewards and cultivates, too. People already working somewhere don’t have immutable personalities. If an employer is a bad influence on its employees, problems will manifest not only with new hires.


Why: giant egos. There were turf wars

Ex-Intel engineer here that worked in R&D- I saw the exact same situations during my time and was one of the reasons I left. The way that teams in the same org with the same goals couldn't or wouldn't work together was incredibly frustrating and such a massive waste of effort. At the time I thought it was a unique situation with our group, but it's sad to see that it's more of a systemic issue that still hasn't been corrected.


There are opposites as well: I've seen companies that would send you home for even a hint of ego. Essentially "I" was forbidden, as there is no "I" in "TEAM". I much prefer a "we" culture to an "I" culture, though.


I used to work there too, but from a software perspective: they simply did not understand how the software world worked. S/W was also relegated to being a bunch of coding guys. It was not uncommon to see some random benchmark touted to assert superiority while customers were using the product differently. It'll end up becoming a manufacturing company that builds to spec from Amazon/Google/Apple etc.


Unlike NVIDIA, which is a “Software Company That Produces Chips”. And unlike Apple, which is a software company that sells hardware to run that software.


Pretty funny, even the one little project I did for Intel as a contractor, apparently there was another group working on their own version of it and mine reeeeally had to beat theirs to be considered a success, and for my client to be considered a success. I think I pulled it off, but it was interesting being a pawn in someone else's chess game!


That’s why Netflix had that epic comment along the lines of “Enron had engraved Integrity into the marble of the main lobby of its corporate headquarters. The true values of a corporation get reflected in who gets rewarded, who gets promoted, and who gets let go.”

People who are political as a prerequisite need to believe there is both a reward and a low fear of retaliation/termination for their actions.


Didn't Steve Jobs usually greenlight 2 or 3 teams to design something and then just pick the most promising one to go to production?

Sometimes it makes sense to have a bit of competition, but having multiple prototypes (often with slightly different design parameters - e.g. a reliable design, an advanced design, and a moonshot) is a better competition than having managers attacking each other with passive-aggressive Powerpoint shows in a conference room.


I worked at a company founded by an ex-Apple CEO. He tried this technique because Steve Jobs did it. The problem was that it doesn’t work at all unless you truly have 2 or 3 entire teams of fully competent people to pull it off. It can work at Apple because they’re absolutely massive and they only focus on a very small number of products.

In our case, making two teams work independently on the same problem just created a situation where each team had half as many resources.

Given Intel’s history of paying median rather than top wages and losing their best employees to competitors, they didn’t even have one full department capable of pulling this off, let alone 2 or 3.


I think they are doing it wrong then. You do the cheap parts independently. Then you pare down to fewer choices and transfer resources to fewer groups.

You repeat this multiple times depending on your resources. When things get really expensive, you should be down to one idea.


I'd imagine that this can be pretty tricky with more complex systems, where the buy-in for even getting off the ground can be pretty high. In software, that's usually not the case, but I can imagine a number other instances where it wouldn't work. This seems like it would show up the most when the problem isn't a greenfield where getting the first 90% working is the hard part, but on the other end, where the difference comes from optimizing the last 10%, which can be really expensive (i.e. figuring out how to profitably and effectively take production from 1000 parts to 10,000 parts can be much more difficult and costly than moving from 100 to 1000).


Having seen a couple of Jobs interviews, it seemed like he was very well informed about all the trade-offs he was asking about. He had some idea how all the bits in his project would scale (in various ways - with more users, programmers, parts, etc.) even if he didn't like getting his hands dirty with the work of actually doing it.


> In our case, making two teams work independently on the same problem just created a situation where each team had half as many resources.

You got it. Not to mention that you're incentivizing them to hoard work/attention/resources from one another, or engage in other organizational anti-patterns that would obviously be/become problems to solve if you didn't put the artificial "constraint" of internal competition in the way.


This is very interesting. Just because a technique works for situation A doesn't mean it works for situation B. Also, maybe this ex-Apple CEO didn't have the capability to pull it off.


Reminds me of the legendary turf wars between SEGA of Japan and SEGA of America - that cannibalized the company, and resulted in shameful, consumer-insulting mistakes like the SEGA 32x.

These issues saw them going from 50% of the console market in 1993/4 to folding console operations less than a decade later.

SEGA is my favourite example of 'too big to fail' and turf wars.

Console Wars is a great book about this story.


It's somehow incredible how this infiltrates even companies with the most advanced technical knowledge. You'd expect a bit more objectivity.


The thing about the top tech companies, particularly Intel and other chip companies where postgraduate degrees are more or less required to do anything interesting, is that people there have literally gone though over a decade of intense competition in the educational system to qualify to get their foot in the door. And then, to stay there and rise through the ranks is another decade or more of intense competition, both technical and political, against similar elites. Needless to say, this can have some pretty corrosive effects without proper supervision.


Recipe for a toxic work culture from day one. Your team should never be your competition. Your company should never be your competition. Meritocracy works only in certain markets. Knowledge-based work isn’t one of them.

I really feel for people who are in these companies. Who put in 6+ years of university/higher education and think that “if I could only fight my peers for the top spot” when “top” is relative.

To someone else’s point, I totally acknowledge that there are certain industries that you have to have a degree to do anything meaningful. I also know there’s tons of companies out there that need bright minds to create the next big thing.

I used to think that I was in competition with my coworkers for that promotion, for that top spot, to basically prove to myself that I could and prove to others I should. I stopped wasting time trying to prove what I already know and now focus on what I don’t.

Office politics sucks and I refuse to play the game. If I get passed up for promotion or if I’m unhappy, I leverage my rights under my state's at-will work laws and go find somewhere else to work.

The best places I’ve worked where this wasn’t a thing (the elite competition for who’s right) were places that acknowledged your contributions, explained why you did/didn’t get the promotion, laid out plans for you to achieve that (or have you on that list for when it opens up) and then actually honors that promise.


Any competition with sufficient rewards attracts this kind of toxicity eventually and that is why larger companies are usually the worst. Getting a promotion at a small company in the EU or in the midwest USA means you get maybe 10k/year extra. Getting to the next rung in FAANG (or Intel, apparently) can mean hundreds of thousands of dollars over the years, so it's not surprising people are a lot more willing to backstab their peers in those places.


Yeah, I’m not anti-competition. I’m anti in-fighting. I also blame companies that structure their pay so radically that fighting for the next rung, because it’s hundreds of thousands more, is a thing. It shouldn’t be. Maybe a 10% bump, but higher up the ladder should mean more responsibility and, more importantly, accountability. If that justifies the extra pay then so be it, but getting a promotion shouldn’t mean the difference of hundreds of thousands of dollars.


Requirement to be a graduate of a "good" university is not a meritocracy. It's just preferring people who have a diploma, but that doesn't mean the person hired will have other qualities necessary to be effective.

Edit: I know some people with diploma who "gamed" the system, e.g. memorized queries for exam without having any understanding of SQL and passed.


>Requirement to be a graduate of a "good" university is not a meritocracy.

I didn't say it was. Meritocracy in this case was in reference to the company's internal competition (organic or not) for top spots at the company.

I completely understand there are certain jobs where you need a degree. Bar even. A lot of software engineering doesn't fit that mold. I'm 100% behind requiring an advanced degree if you're writing software that controls: lives/property/money in any meaningful way. Money in this case being large capital investments, not consumer payments. I'm a big proponent of hiring people who need a second chance, who maybe are down on their luck, or otherwise have a different path than suburban white America or the H1B visa mills they created (caveat, I'm white). I think there are plenty of opportunities for software engineers without degrees to build web apps, mobile apps, services, integrations, automation, etc.


Mutual cooperation is often spurred by personal and professional competition. I know I've been with teams that I had to drag through projects because they "get paid the same" whether or not it ever gets finished, and well-complemented teams where we forgot everything else while we built on each other's achievements.

There's selfish, winner takes all competition that creates winners and losers, and then there's healthy, productive competition where everyone who engages comes out better off for challenging themselves against a peer.

And obviously, there has to be a culture that provides resources no matter the outcomes. If you starve the 'losers', you're just creating desperate bait dogs in a toxic work environment.


Head fakes. Learning spikes under the guise of competition are amazing tools: hackathons, etc. Internal competition with this cut-throat attitude (because so much money, potentially, is on the line) is toxic and can create some serious mental health issues. This isn't a discussion of competition in the markets, free market and all; it's about competition with the people you work with day in and day out. It can create animosity and anxiety at work. I would much rather work with a genius who's humble than an asshole who is just looking for the next promotion.

To truly grow as a company you need to grow as teams. Cooperation, trust, empathy for your customers as well as your coworkers. Positions of leadership should be given to those who show an aptitude to lead through problems to desirable outcomes, regardless of who/where they are. Flatter org structures to prevent this chasm of salary difference which creates this toxicity in competition for those spots.

Netflix pays their engineers a ton, because they provide the value. Everyone is compensated at a level that everyone is happy working there (for the most part, happiness is relative, I digress). Companies should be providing everything they can to those who provide value to the company. If an individual wants more responsibility, they must provide more value as a result.


> people there have literally gone though over a decade of intense competition in the educational system to qualify to get their foot in the door. And then, to stay there and rise through the ranks is another decade or more of intense competition

Two decades of requisite grunt work will also encourage your best junior talent to seek out start-up opportunities.


Bingo! After a few tries one will always look inward (healthily or not) and seek out other opportunities.


> "Two decades of requisite grunt work will also encourage your best junior talent to seek out start-up opportunities."

For silicon and hardware in general, the job market is much, much smaller, making that decision significantly more difficult.


> Needless to say, this can have some pretty corrosive effects without proper supervision.

And half the time the supervision is the exact opposite. Concepts like stack ranking are a sure-fire way of making the workplace more contentious, politicised, and publicised.

Because that's the thing about

> intense competition

95% of the time it's about visibility, the technical excellence only matters if it's visible and publicised to decision-makers, which means self-aggrandisement and a mastery of office politics not only can compensate for lacking technical skills, but will usually edge out technical skill entirely.


One value that cuts through most of elite society is the thought that competition produces better outcomes. So “competing” instead of collaborating with another office is good, actually, because competition will breed excellence (or something).

When you’re small all that competitive energy is directed at your competitors. When you’re king of the hill that competitive energy turns inward.


It's actually both, a competition that relies a lot on collaboration.


They should compete against the odds instead.


Indeed, for we computer geeks are known for our lack of ego...


Absolutely not. Seeing my 4-year-old son experiment with social skills, I’m learning so much about the most inexplicable conflict situations I’ve been through in 15 years of IT.


I completely agree with this. I’ve been in software since ‘98, but many of the valuable tools I apply to interpersonal interaction were learned on the path of raising my 5 year old.

Natural consequences, timely feedback, parallel modeling, Socratic questioning, mirroring...

If you want to know how to handle the politics of a large organization, try interacting with children. The parallels are amazing.


Yup, I'm considering offering to help with programming extracurriculars at what will become his school just to get a sneak peek at how the older children deal with unfamiliar abstractions. I've been living in a bubble of tech savviness and contempt for as long as I remember, I need to reset.

And he's actually 3, so I can't really sit around and wait another 5-6 years before I get to figure out the perfect "business customer" :D


Totally, children are a wonderful display of human nature and a chance to understand ourselves better. I’m amazed by my interactions with a niece, 1.5 years old. She's so cunning already: for example, she accidentally puts a toy on the side of the bed so it falls over, I pick it up, and she immediately starts to put it juuuuust on the edge again so it looks like she did it accidentally and it fell off. I think I finally stopped getting microgrudges towards people's behavior that day :) We just like to play.


Super super interesting and I'd love discuss - drop me a note (contact info in my bio) :)


Same observation, from my cats, as I do not have kids. Whenever they come, the cats will meow right away, as they do not have delayed gratification. They also want fairness, otherwise they will attack each other. The food has to be given at the same time and in the same portion to each of the three cats.


Pardon me but I'd consider semiconductors experts different from computer geeks. Seems like I'm wrong.


You don't use computer science to make sure people are properly incentivized.


Tech nerds are some of the least objective (and unknowingly too) groups of people I have ever known.


Maybe the feeling of being on top of the trendy tech and above the norm... hubris, in a word.


Are you talking about Atom or XScale? XScale seems the biggest missed opportunity. I can understand the thinking that being just another ARM vendor was a step down from being the owner of an architecture (AMD, and Via, notwithstanding). But with steady improvements and investment, and riding the coat-tails of fabs Intel built, Intel could have been a an actual mobile contender today.


Bonnell which became Atom. I was pushing to make XScale be the 1GHz multi-core lead vehicle but was soundly mocked. XScale died on the vine. You're 100% correct, it was a hyuge missed opportunity.


If you want turf wars, even if everything else is perfect: Create turfs. The worst period (and the only bad period) of my own company was during the time when teams working with overlapping stuff worked at different locations (at the time this was simply because we didn't have the room. Fortunately we could get back into the same site when we moved to a bigger location, but it took years to heal what the separate locations cost us).


Strange, I would categorize the problem we had was that most groups had a super-majority of people in California who would end up reaching social consensus instead of having technical arguments.


A former Intel engineer here. Paul Otellini -- a non-engineer -- was not the right man for the CEO job. He instilled a bean-counting culture, driving technologists out of the decision-making process. It was downhill ever since he became the boss. And as mentioned elsewhere, the counterproductive and oftentimes adversarial relationships between "sister" design teams (Santa Clara, Jones Farm, Israel, Austin) were a huge drag.


> There were turf wars between Austin, Santa Clara and Israel over who would design it, and the team that won out had long since lost its best principle engineers

So which team/location ended up winning?


"winning" is hard to define here, because the work became so diffuse (meaning, several sites grabbed the Bonnell/Silverthorn core and started whacking at it), but on paper (foils?) it was the Austin fellows who got the gold star. Some readers who were there may disagree.


I used to work in Austin. I always thought Austin was the Atom HQ of sorts because that was where ARM was also located. And at the time it seemed like Austin was the mobile epicenter of the chip world. But yeah, at Intel there has always been a rivalry of which location owned what projects. Thank god I'm not there anymore.


Wherever the teams at AMD, NVIDIA, Apple and TSMC work out of.


Do you think Intel can come back, or is Intel too dysfunctional/inefficient to recover?


I had friends at the Intel fab in New Mexico in the 90s and this type of turf war was prevalent even then internally at the fab. Managers were constantly jockeying for position at the expense of manufacturing processes.

Just one anecdotal story, one of the wafer guys had an idea to improve a process. He went to his boss who shot it down instantly. Wafer guy knew he was right so went to another manager who liked the idea and implemented it. It ended up working and being a great idea. Instead of being rewarded for his idea, wafer guy was fired by his manager for failing to follow the chain of command.

It's built into the culture at Intel to be territorial to advance. It's been that way for at least 30 years. I personally don't see it changing any time soon.


I'm currently reading High Output Management which is 25 years old. So your comment addresses the time where the book was written.

Grove certainly sends quite utilitarian vibes. He also says that a collaborative culture is important. Your comment suggests that neither he nor his successors managed to make Intel's culture collaborative enough.

Higher management is probably often an iterated prisoner's dilemma. Collaborating is best for the company, but defecting is good for the ambitious manager.
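
A minimal sketch of that payoff structure in Python, with made-up numbers that just have the classic prisoner's dilemma shape: defecting (hoarding talent, empire building) is the dominant move for each manager individually, while mutual cooperation maximises the combined "company" payoff. In the iterated version, repeated play with memory (e.g. tit-for-tat retaliation) is what can make cooperation stable.

    # (my move, their move) -> (my payoff, their payoff); C = cooperate, D = defect
    PAYOFF = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def best_response(their_move):
        # whichever of my moves pays me more, given what they do
        return max("CD", key=lambda mine: PAYOFF[(mine, their_move)][0])

    print(best_response("C"), best_response("D"))     # D D -> defection dominates individually
    print(max(PAYOFF, key=lambda k: sum(PAYOFF[k])))  # ('C', 'C') -> best combined outcome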


Is it a good book you'd recommend otherwise?


Not the parent, but I read the book, and the principles have become so common nowadays that I never really had a wow moment. It was more like "Yeah, this guy is telling me things I already know." It is nice to see the common knowledge built up from scratch, rather than just automatically accepted the way it is, but I wouldn't consider the value you get from the book worth the time, today.


I agree that you get most of it by reading HN for a few years. However, it is a small old book so not that much of an investment.

It did have a few ideas which were new to me and it presents a holistic consistent management philosophy which you don't get from reading a hodge-podge of blog posts.

Two (for me) new ideas: Matrix organisation is inevitable for large companies in search of the sweet spot between agility and efficiency. Knowledge workers are middle managers.

Overall: Good book but not "you have to read it" level.


The difference is that back then Intel still had a quasi-monopoly. Even AMD's occasional successes from the late 90s on were usually temporary, and never an existential threat to Intel, so they could afford a certain level of internal dysfunction.

But we are now living in a world where mobile is more important than desktop, and mobile chip designs have enough power to be a threat to the desktop market, and perhaps even the server market in the not-too-distant future. This is something that should worry Intel.


I think they have been a commodity for a long time. Rephrased, the discussion is similar to, "Will GE make a comeback in the refrigerator market?" Intel isn't going anywhere; they have many, many big customers to satisfy who simply cannot shift to ARM due to ROIs <= 0. And after 30+ years of this, I also don't think there's anything new or revolutionary in silicon fabrication on the horizon. I stopped getting excited about CPU tech when AMD crossed the 1GHz barrier in the PC space. Could be burnout though.


> I stopped getting excited about CPU tech when AMD crossed the 1GHz barrier in the PC space

That was only around 2000. There was a lot for me to get excited about since then:

* AMD64

* SMT

* Dual-core / multicore

* On-chip integrated graphics that weren’t awful (low-end discrete graphics aren’t a thing anymore)

* Gigantic L3 and L4 caches (a side-effect of on-chip graphics)

* Turbo mode making a comeback (yes, I know Turbo Mode switches actually slowed down computers)

Judging by Apple’s direction, it’s looking like we’ll start seeing actual (ultra-fast) RAM on-chip too; with Optane, that could mean we can finally move away from block-addressed storage and have a massive flat memory space for everything.
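
As a rough illustration of what a flat, byte-addressable space feels like to software (versus block I/O), here's a Python sketch using mmap on an ordinary file as a stand-in; actual persistent-memory hardware would let this kind of access hit the medium directly, which block-addressed storage can't do:

    import mmap

    # Map a 1 MiB file and treat it as one flat array of bytes.
    # An ordinary file is only a stand-in for persistent memory here.
    with open("scratch.bin", "w+b") as f:
        f.truncate(1 << 20)
        with mmap.mmap(f.fileno(), 0) as mem:
            mem[12345] = 0x42        # poke a single byte, no explicit read()/write() calls
            assert mem[12345] == 0x42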


AMD64... Aka Yamhill on Intel. Holy shit, I had to sign waivers and get a special Unix account just to ACCESS the RTL model with Intel's 64-bit iHDL.

I kinda laughed when AMD beat Intel to the punch not once, but TWICE. (1GHz, then 64b).

I also worked on the 740, 810 and 815 (both hardware and drivers, oddly, I did a lot of stuff). Game devs HATED us for the 810/815 because it was such a huge volume they had to target it as the LCD.


Can you elaborate on all of that? I'm unfamiliar with "RTL", "iHDL", "740", "810", "815", I assume "LCD" is lowest-common-denominator?


Correct me if I am wrong.

RTL https://en.m.wikipedia.org/wiki/Register-transfer_level

RTL is a way to model a digital circuit, and some common HW design languages are VHDL, Verilog and Intel's proprietary iHDL (rough sketch of the idea below).

Numbers are just Intel chipset families.
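
For a feel of what "register-transfer level" means, here is a deliberately tiny Python sketch (a toy counter, not iHDL or any real HDL): state lives in registers, combinational logic computes the next values from the current state and inputs, and everything is latched together on a clock edge.

    class Counter:
        """Toy RTL-style model of a small counter."""

        def __init__(self, width=4):
            self.width = width
            self.count = 0                      # the register (state)

        def next_state(self, enable):
            # combinational logic: next value from current state + inputs
            if enable:
                return (self.count + 1) % (1 << self.width)
            return self.count

        def clock(self, enable):
            # clock edge: latch the combinational result into the register
            self.count = self.next_state(enable)

    c = Counter()
    for _ in range(20):
        c.clock(enable=True)
    print(c.count)   # 20 mod 16 == 4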


> And after 30+ years of this, I also don' think there's anything new or revolutionary in silicon fabrication on the horizon.

Why so? A few more material and device generations are on the way. Litho has gone through more generational changes in the last decade than in any previous one.

Whole new device classes are on the way: optics on silicon, new memory, logic in the backend, and logic on package.

Semiconductor engineering is still career suicide though; that hasn't changed in the last few decades.


This is my "640K should be enough comment", but we've reached the limits of optical fabrication with ~2nm. And really, it is just smaller transistors. Yes, that glibly does a disservice to the process engineers making this miracle happen, but I can get excited about "look, smaller transistors!" only so many times in my lifetime. :)

> optics on silicon,

Yep! That and quantum and post-von-Neumann architectures. There's some weird shit on the horizon that I probably won't live to see in household products, but I don't think it will be optical deposition by layer. It might, who knows, I'm officially talking out my arse now. :)


Optics may come earlier than people expect.

And not even for I/O, but for on-die communications, and busses.

High-speed SERDES may well be replaced by same-die optics, as a lower transistor count, dumber, and cooler solution.


> Semiconductor engineering is still career suicide though; that hasn't changed in the last few decades.

This is so true. I am currently a hardware engineer and planning to switch to software. If you are not at Nvidia, Apple or other mega-growth hardware companies, you are suffering at least a 40% pay cut compared to the software industry while working about twice as hard, using obsolete tools and flows, with almost no opportunities for technical advancement or significant impact. Meanwhile the annual raises are cost-of-living adjustments. It's truly depressing.


> Semiconductor engineering is still career suicide though; that hasn't changed in the last few decades.

Wow, so I need to thank myself for accidentally bumbling towards ops and dev? Meh, for me it’s just that in the EU there’s a total dearth of opportunities in the industry.


People making WordPress websites are making more than PhD research scientists in Taiwan, and Korea.


TSMC announced that they're raising salaries by 20% across the board, so that should improve things a bit: https://www.youtube.com/watch?v=T6DaxXukog4


And they cut the bonus at the same time.


Depressing


> Semiconductor engineering is still career suicide though; that hasn't changed in the last few decades.

Could you go into more detail?


Most of the semiconductor jobs are in Asia, and in those same Asian countries you will get n times less than a software person of the same seniority.

The long-term trend is that the industry will get less and less labour intensive as more and more old, labour-intensive fabs are retired and new ones are 100% AMHS.


AMD came back. Fabless, bruised, but also thin, agile and with a 10 year plan that thanks to TSMC they can execute more or less on schedule.


AMD came back from being second fiddle in the right market. Intel would have to come back from being first fiddle in the wrong market.


Competition is good for markets. There's still not enough competition in the general CPU market, but it's getting closer: Intel, AMD, Apple, Qualcomm, Samsung, Broadcom.

Intel has money and equipment and existing market share: there's no reason for them not to be profitable for the long term, as long as they stop acting like they don't have any competition (as they did until 2018) or that their only competition is AMD (as they currently are pretending).


Intel will have a chance to regain relevance in several years when the industry reaches the next disruptive inflection point. That will probably be tied to the arrival of a new hardware form factor like AR goggles or direct neural interfaces or something.


Phone chips are hard, not least (or especially) the telecom part. I'm thinking that some companies underestimate the effort.

Broadcom also tried and also failed. They came in cocky because they had succeeded in making WiFi chips with a 10-20 person team, so they thought that 500-ish people was more than enough for cellular even if Qualcomm had 4x that...

I would not be surprised if Intel fell into the same trap.


At one point there were around 14 companies going for the mobile chip market. So it was clear there was going to be a massive casualty rate.


TI was also in the mobile AP space for a while then dropped out


From what I've heard their mobile SoCs were pretty high quality and had superb documentation, powering famous (relatively) open Linux devices such as the legendary Nokia N900. TI dropping out of the mobile game also had a profound effect on the full Maemo/MeeGo saga later on.

I guess even Sailfish OS, which came after MeeGo, is not really immune to a SoC supplier dropping out from under it, as the first Jolla smartphone with Sailfish OS should have run on a SoC from Sony Ericsson, only for SE to drop out of the game as well! And the (in?)famous Jolla Tablet had one of those Intel Atom x86 mobile processors.


It doesn't help that there are also cultural differences. I've been in a similar turf war between an Israeli team and myself as an ex-Israeli on a different team, and there was just a huge gap in how things are done. Both teams could probably have delivered just fine, but top management is what really messed things up. So really this is probably mostly lack of leadership, and the turf wars are more of a symptom... The lack of leadership often shows not just in being unable to resolve/deal with/manage a larger org, but also in hiring more managers who are unable to do this and who eventually get promoted to positions where they end up doing more harm.

I'm not sure the engineers themselves having a big ego is a negative. I'd want a team who thinks they can do anything. But how you manage those guys is the problem...


Intel turf wars extend beyond Geos. The 2nd floor and 4th floor of SC12 would tick-tock Xeons and rewrite RTL and verification code just because.


The real action was on the 4th floor. You know, I can tell. :)


And the aforementioned turf war.... then it just corroded over political battles.

I'm no longer surprised, but I'm still always amazed when this happens. It is really crazy how groups within organizations are often unable to see that collaboration is massively more in their interests than minor empire building. It's like they're trying to build a wooden fort with a moat filled with mud around it when they could be building Camelot.

(Well, Camelot minus the whole thing with Lancelot giving it to Guinevere, Arthur to his sister, and his estranged son coming back to kill him. Not quite that Camelot.)


I seem to recall Intel was working on a few mobile chips over the past ~15 years. Or was it all part of the same drawn-out failure?

I also seem to remember shopping around for a new phone a few years back and coming across Android phones running on what seemed like a Frankenstein Atom processor. At the time I thought it was a promising concept, though benchmarks weren't flagship quality. But I'd hoped to eventually see dual-boot phones I could slot into a laptop shell to have access to an ultraportable low-end Windows box when needed. I'm kind of disappointed that never came about. (I know there are some promising "shells" for Android phones, but I'd really like an OS I can fully control for my desktop experience, and Linux phones don't quite seem mature enough.)

Anyway, any insight on the above from your unique perspective would be welcome.


This thread reminds me of a lecturer I once met who had a theory based on a Machiavellian scale:

Entrepreneurs typically rate low. They rely on goodwill and reciprocity and great relationships to get things done as they have zero power. Early power comes from getting people to buy into your vision.

Corporate middle / senior managers rate higher because it is generally all about the power.

The interesting insight was that as an entrepreneur, when your business scales it is hard to retain the "low Machiavelli" approach. This is where a lot of the growing pains come from.

Edit: add link https://www.harleytherapy.co.uk/counselling/machiavellianism...


Intel didn't have real competition for a long time. Eventually it got misguided by its own might.

Turf wars and cookie licking are a problem for all big companies; we are all human, and when there is no external threat, we turn to infighting. After all, why not? It compensates for the missing competitive pressure, which is also necessary.

But being unable to form a cohesive strategy and call the shots would be Intel's own downfall. Looking back over the past decade, they have been on a one-way losing path, without any meaningful expansion. Had they had decisive leadership, they would definitely be faring better right now.


From my reading of articles at the time (I was waiting for Android phones to become available before the iPhone was even announced), I thought all Apple had to do was scale an SoC to their massive production level; other people had already figured out the computing.


Politics at Intel are crippling. Add to that huge egos, arrogance, red tape, terrible managers, self-congratulatory pats on the back, and you'd wonder how anything gets done.


Perfect example of why Musk’s no silo rule helps his companies so much.


How much of the blame do you think is Otellini's? I got a really bad feeling about Intel when he became CEO.


100% but the culture was already set then. But letting the finance guys run the shop put a stake through the heart of any engineering soul left


this is sad.. sounds like a lot of good talent went to waste. Based on what you said here, sounds like the problem was not engineering talent but lack of vision/leadership :(


Corporate mess. I know it; not Intel, but the politics, the "who gets all the credit?" before the first step has even been taken. Been there.


This is why good CEOs deserve 100x, 200x, or 300x the median salary. They don't work 300 times more than a regular employee, but they are not supposed to. They are there to prevent all the bad things that naturally happen at large companies: turf wars, bloat, focus on process rather than delivery, duplicated work, manual and buggy processes when automation is possible, unfocused investments, mindless chasing of buzzwords, etc, etc. If you don't have a person at the helm who's a blend of visionary and dictator, you end up with a good company sliding into irrelevance, like IBM, and maybe now Intel. Those CEOs who keep their star company in its current place definitely deserve the tens of millions they earn.


Do you observe any correlation between CEO compensation and their skills at the above?


The entire world has its eggs in one basket: Taiwan Semiconductor Manufacturing Company. Say what you want about Intel, I am hoping that they get back on their feet in the next few years with 5nm manufacturing.

Since when are competition and choice a bad thing? If you're a chip design firm and you have an 8-month backlog to get your masks made at TSMC, what do you do then? TSMC can be the world's best fab, but we need to address the SPOF condition that is so close to the political hot zone in the APEC region.

Intel should spin off its fab division as a separate company, hire some top executives to rejuvenate the workforce, make it the coolest fab company in the world to work for, and leap ahead. Intel TMG's Sohail was the biggest borderline-criminal executive; he was forced out a couple of years ago and was responsible for the 10nm delays.

It is enormously difficult for any startup to get into this space: you need billions and years of expertise to start a fab. Or maybe YC folks can? I worked in the fab but don't have a birds-eye view of what it takes for a startup to get into this space. It would be amazing to see startups trying to manufacture semiconductors... you know, live up to Silicon Valley's name. You could have billions of dollars of economy built on services -- the US, EU and most advanced economies are shifting towards services -- but they all rely on semiconductors. Imagine something were to happen to that... It is a juicy problem. We don't need Uber/Airbnb unicorns out of SV, we need the Fairchild of the modern world from SV.

Edit: To see what kind of a beast EUV litho is, this is a good short documentary on it: https://www.youtube.com/watch?v=f0gMdGrVteI


Fabrication is where Intel is falling down.

They have fresh new chip designs that would be perfectly adequate, but they are still hobbled by 14nm. They have now released three? four? iterations of x86 design on the same 14nm process, because 10nm is that troubled.

If you spun off the fabrication, as AMD did, you would have a much smaller but more competitive fabless business, and the fabricator would implode (which is pretty much what happened to GlobalFoundries, AMD's former fabrication house).

In any case, much of the value for shareholders would be eradicated. Intel has historically done very well as a vertically integrated business. They would like to remain vertically integrated.

We may see Intel resort to outside fabrication -- executives have already mentioned it in the business press -- but I very much doubt we will see them discard their internal fabrication capabilities.


I am afraid they're going to be using the US government as their wheelchair to survive a few more decades: https://newsroom.intel.com/news/intel-wins-us-government-adv...


That shouldn't be surprising.

The Military Industrial Complex is a huge beast. Almost all the big tech companies started before 1970 or so were funded by DoD research money. The only reason Silicon Valley is where it is lies in DoD radar research constraints. Computer tech started as stuff like solid-state transistors to improve radio efficiency and size.

Since the MIC is a good ol' boys club with the tax dollars to pick winners and losers, of course they'll help their friends first "for the good of the country".

I'd also like to note my displeasure that my tax dollars pay most of the research bill, but the various companies walk away with the patents, copyrights, and profits. There's no reason for someone else to get rich off my investment.

Want to solve the national debt? Require royalties for government investment.


The Boeing of semiconductors


War must always be considered, if you want to keep existing as a nation.

People have been repeating "Si vis pacem, para bellum" for thousands of years for a reason.

Semiconductors and aerospace are important to war efforts, and having domestic manufacturing capability is a big deal if you have serious, wealthy, technically advanced adversaries.

Reality is complicated and many-faceted, and there are a number of reasons to keep Intel alive as far as the govt goes: war, staving off brain-drain, keeping a lot of jobs, etc.


Your praise of war comes off as quite naive. Perhaps you figure "war" or violence are viable solutions despite this being a much more connected world where diplomacy and fairness can be judged more democratically. Such talk of 'war is necessary' is the utterance of someone born on the right side of the fence who quite clearly has no clue what they are talking about.


Ask Ukraine about the practicality of disregarding defense as a necessary evil.

It’s hardly naive to assume that, since there has always been conflict, conflict will continue for the foreseeable future.


Take this from an extremely liberal and war-abhorring person: if your argument is that a country should not take steps to secure its supply chains in the event of a war because <morals or something>, then you are the naive one.


That's why diplomacy works for the vast majority of countries. War is expensive, and the costs are always more than anyone, regardless of how sophisticated they believe themselves to be, could possibly imagine.

Take that from someone who lost a large chunk of his family, currently in an African country (a shithole, as the US president aptly put it). I'm on HN because this is the most peaceful time in recent memory. Do me the favour, at the very least, of putting a handle on your delusions of grandeur. In our connected world, diplomacy should be the only way to resolve national conflicts.


Sure, but countries also need to not be codependent for diplomacy to succeed. Diplomacy and trade are the basis of an international peace that might last centuries, but for that to work properly countries must also strive to be potentially independent; otherwise you get global crises and situations where military threats are necessary to force diplomatic relationships.


So it's gonna be another too big to fail thing again.


In this case it’s a legitimate national security subsidy. Even if it’s perpetually a few nodes behind, the US military needs a high-tech fab company headquartered in America, staffed mostly with Americans, that does at least some of its manufacturing in the geographic United States.


Less too big to fail, more a national security interest in ensuring the US can make chips during a hypothetical conflict.

Great if you’re the company the gov’t blesses for this purpose (like Microsoft) but not so great for their competitors.


I'd probably be nervous about TSMC opening a fab in AZ then


That fab will never open.

Just like Foxconn's plant in Wisconsin.

It'll just keep getting pushed back, and pushed back. Its job is to keep the politicians happy (or at least not angry), and it doesn't need to be built in order to serve that purpose.

That fab will never, ever, ever run a single wafer.


I think in this case it's a little different because of the milsec angle and Taiwan's geopolitical role in "great power conflict" with China.


I suspect it very much will exist, but only once it's a node or two behind. It'll never be bleeding edge.


One of the questions I have is just how much of their relatively recently gained process optimisation knowledge of e.g. 14nm+++ will be transferable to future node shrinks, and will this give them an IP edge over TSMC if they can get back on track with node shrinking?


People root for anything-but-Intel because competition is a good thing and Intel had none for a very long time.

I don't know where the notion of "Intel is about to die and TSMC will make us suffer due to monopoly" comes from, but before people cheer for Intel again, they probably need some time to have their execs cure some disease in some poor part of the world, do a few AMAs on Reddit, open-source something, and donate to some cool projects, in tandem with actually improving their products, before regaining the goodwill of the people. They can take some cues from Bill Gates and Microsoft.

The consumers are pissed. AMD's comeback is attracting a lot of fanfare because people are sick of not having an option besides "should I get an i3, i5 or i7" for a very long time.


> TSMC will make us suffer due to monopoly

The fears have less to do with TSMC the company and more with the fact that it's a pawn in the geopolitical chess game between the US (and her Pacific allies) and China.

The issues with supply chains through Taiwan have little to do with whether gamers are buying their CPUs from AMD or Intel, but the fact that a critical piece of infrastructure in the Western economy is very precariously located, and in the event of war or political strife is not going to be able to be mobilized for domestic (or even friendly-allied) production.


I agree with you. But I think it's a can't-see-the-forest-for-the-trees symptom that is possibly due to the typical HN demographics. For a decade or longer the US has relied on Chinese manufacturing for industrial infrastructure that is as critical as (or even more critical than) wafers. It doesn't help fix the problem to single out semiconductors while keeping every other industry locked in China as-is.


All of those have second sources.

5nm silicon is, literally, the only thing that only one company can make.


It would have to be quite a long war where the military couldn’t rely on existing equipment and inventories of chips for a while. It’s not like most military equipment uses the latest CPUs anyway.

And if a war did drag on for years there would be opportunity to switch suppliers and manufacturers or build new infrastructure.


Who said anything about any wars? Rumors abound about TSMC infrastructure being booby-trapped due to national security concerns. As soon as China reaches 14-28nm and is able to sustain internal needs using home-grown foundries, an "accident" in TSMC facilities could put the mainland in the position of silicon leader for at least half a decade.


Why half a decade? They just have to buy new equipment. The know-how remains.


Some industries take decades to establish. You don't have a 5-year stock of CPUs, and you will probably be left high and dry if the supply is cut off by a natural disaster or something.

Where do you even get people with the knowledge required to run the 5nm fabs? There might be none at all on the continent.


And moreover, it is going to be instantly destroyed and unrepairable.


Sounds like a US screw-up: they tried to prevent a monopoly, which is now being broken by an outsider, and now they're complaining that there are no Western alternatives to that outsider.

Anyway, the machinery that TSMC is using is produced in the Netherlands AFAIK. The US should be fine; there's a lot of capital available to buy the machinery, recruit the talent, and build the fabs if it comes to TSMC not being a reliable partner at some point.


Critical technologies like optical proximity correction are fab-proprietary.


That's why you recruit the talent too.


Even if TSMC were a Western company, a monopoly on the bleeding-edge process would be bad for the world. I hope no more fabs (TSMC, Samsung, Intel) drop out of the race.


Tech enthusiasm and criticism has become extremely cyclical in the past decade. It’s easy to forget that we were all cheering for Facebook and the Zuckerberg success story before it became uncool to be pro-Facebook.

Intel is stumbling, but up until very recently they were still the fastest gaming CPUs available. That didn’t stop gamers from criticizing them endlessly and cheering their demise. Intel went from cool to uncool.

Give it a few years and they might be able to leverage the underdog position into a comeback story.


> It’s easy to forget that we were all cheering for Facebook and the Zuckerberg success story before it became uncool to be pro-Facebook.

This is emphatically untrue. There was always a significant minority in "Hacker" and privacy circles that were intensely sceptical of Facebook in specific and "social media" in general.


Dude, I was never cheering Facebook.

The most positive opinion I have ever had of it could be described as indifference.


> leverage the underdog position into a comeback story

The comeback story will need Intel to become more productive with CPU manufacturing R&D. Is there any reason why, after such a prolonged decline, Intel may have a comeback?


> I don't know where the notion of "Intel is about to die and TSMC will make us suffer due to monopoly" comes from

Semiconductor R&D costs have been going up; companies have been consolidating or moving into niches. There are jokes about having to buy from The Semiconductor Company® in the future.

Intel is sitting on a lot of money, so they're not going to die soon. But they haven't been showing signs of improving the situation since 14nm was delayed in 2013. This saga didn't start with 10nm. Remember that the cadence used to be tick-tock every 18 months. We still can't get 10nm desktop chips. So there are some questions about how they'll fare in the future.


I for one am very glad that TSMC amounts to a SPOF. I've been to Taiwan twice, and I'd much prefer to use an old chip a while longer than for this lovely, open nation to be invaded by a totalitarian regime. There are only a few things preventing an invasion, mainly that the US would like to keep this "unsinkable carrier", and TSMC. As Xinjiang shows, only international pain will make the world react to China's actions, and TSMC might be able to cause that.


Samsung is not that bad. They just announced a 5nm chip; it's not as good as TSMC but it's not as far behind as Intel either.


Right, don't underestimate Samsung - they have top-tier fabs and manufacture a lot of chips. If the world has all its eggs in the basket of one company, that company is ASML.


Right, so our eggs are not all in one basket, but in two. One claimed as rebel territory by the CCP and one still technically at war with a nuclear-armed hermit kingdom.


> one still technically at war with a nuclear-armed hermit kingdom.

If we're going down that route, every country in the world can be described in an equally alarmist manner.


This is an absurd amount of reductionism. The chances of actual military conflict in both the Korean Peninsula and Taiwan are objectively much higher than most other places around the world. Having both of the top tier semiconductor manufacturers reside in such a hotspot is absolutely a cause for concern.


Yeah but most of them are more than 300 miles from Xi Jinping.


Fortunately ASML is in the Netherlands.


I was obviously referring to Samsung.

As for ASML, if it were the key to Intel's woes, the article we are all commenting on would not have been written.


> As for ASML, if it were the key to Intel's woes, the article we are all commenting on would not have been written.

It not being the issue discussed in the current article doesn't mean it isn't a critical issue.


At risk of entering Rube Goldberg territory, I suppose we could speak of two baskets hanging by a common thread.


So possibly a couple meters under even current sea level ? ;-)

(Continuing the sky-is-falling theme. :) )


Completely agree. The world's supply of semiconductors is dependent, in some ways, on the benevolence of the CCP. TSMC is like Arrakis. We need to diversify the supply chain.


TSMC is a Taiwanese company. The world can ensure the supply of its semiconductors by ensuring Taiwan has access to weapons and guarantees of military assistance in case the CCP's benevolence wavers.


China claims that Taiwan is a renegade province.

When (not if) China acts on that, there will be trouble.

China has strong infowar capability and is developing it quickly. Old-fashioned spy-vs-spy type warfare works, too, but is less visible.


If China decides to take Taiwan by force, there is very little the rest of the world can do to stop them.


Nazi Germany attempted to take tiny Malta from the British for two years and failed.

The Japanese held many untenable defensive positions in the Pacific for months.

Unless China is willing to nuke Taiwan (what would be the point?), a military offensive to take over Taiwan would be extremely unlikely to succeed within a few months, and assuming the US/Australia/UK/Japan/South Korea provide support, almost impossible.

Doing any kind of amphibious landing offensive is incredibly costly and difficult.

China's best hope would be that Joe Biden is under CCP control.


None of those other islands were even remotely close to the strategic value of Taiwan.


Will do? Sure. But can do? Sanctions, war, ...


And, uh, based on the diplomatic recognition situation it's pretty clear that Teh World has considered your suggestion and very explicitly rejected it.


Isn't TSMC building a fab in AZ? Anyway I'd say the "spice" in semiconductor production is rare earth elements, since the CCP controls most of their extraction (and has artificially lowered exports in the past)


> Anyway I'd say the "spice" in semiconductor production is rare earth elements, since the CCP controls most of their extraction

That's really only because of other countries outsourcing to China and not supporting domestic mining. There is no shortage of rare earths in other countries.


That's very true. European countries "go green" by exporting their mining pollution to China. They can then blame China for all the pollution problems.

Brilliant really. Get political points for "lowering emissions", then double dip getting more points for "being hard on China" while at the same time not hugely affecting the economy, jobs, or ticking off industrialists.


> Isn't TSMC building a fab in AZ?

Yes, but the planned chips are an older generation right now and they'll be even older when the fab actually starts production. TSMC would never move cutting edge production outside of Taiwan.


It’s just such nincompoopery, people speculating about geopolitics. TSMC is owned by normal people who want to make money; if for some reason they aren’t able to trade, ownership will move the IP out, put it on a USB key, and build elsewhere.

Just think about it rationally. If TSMC isn’t allowed to meet demand, well of course someone will be incentivized to move the IP elsewhere where you are permitted to meet demand. It would be such a colossal opportunity. Indeed shutting off TSMC would be great for other people interested in getting into chip manufacturing, in the same way that reducing competition in anything benefits producers.

The reason that doesn’t happen has nothing to do with geopolitics and everything to do with what every mainstream US politician has always been saying: subsidies, and the price of labor.


This is a naive take on Taiwan's status and China's attitude towards it. The simple fact is TSMC being in Taiwan greatly incentivizes western powers to protect Taiwan from Chinese aggression. Everyone knows it, especially the Taiwanese government.

> Just think about it rationally.

Indeed.


No, they're saying they're building a fab in AZ.

Like Foxconn is (still!) saying they're building a fab in WI.


> Since when is competition and choice a bad thing?

Who is saying this? There is competition, and Intel is starting to lose.

> If you're a chip design firm and you have an 8-month backlog to get your masks made at TSMC, what do you do then?

Wait 8 months. This is a lot better than getting a "Lol, we don't do custom masks" from Intel.

> Intel should spin off its Fab division as a separate company, hire some top executives to rejuvenate the workforce, make it the coolest Fab company in the world to work for and leap ahead.

Yes, they absolutely should do that. The only question is if they have the guts to do it.


China invading Taiwan would be the IT world's COVID moment.


Tick-Tock-Tick-Tock-Tock-Tock-Tock-Tock ....


Maybe this is a bit of a side topic, and I'm a layman, but I'm interested to know why the M1 chip is able to perform so surprisingly well. And why can't Intel or others just come up with a similar design, if it's clear from the specs/die images what's being done?

Is it years of work to figure out the placement of CPU/GPU/memory units on the die and the routing, and to come up with new instruction sets? I imagine this wasn't new physics; Apple was just using contract fabs, right?

What's the secret sauce that let them pull this off, and fend off others?

Edit, oh, this is somewhat answered below in https://news.ycombinator.com/item?id=25093313


Apart from cache, anandtech notes https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

  What really defines Apple’s Firestorm CPU core from other designs in the industry is just the sheer width of the microarchitecture.
"Width" enables greater parallel execution of instructions.


One of the interesting comments in that article was how they pinned the limited width of x86 decode implementations on variable instruction length. There are obvious code density benefits to the x86 variable-length approach (especially in immediate encoding), but I guess the need for realignment creates long critical paths on the frontend. I wonder if the more constrained variable instruction length of RV64GC (32- and 16-bit instructions only) will be able to similarly scale up to 8-wide instruction decode like Apple has been able to do with AArch64.
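
To make the realignment problem concrete, here's a toy Python sketch (not real x86 or AArch64 decoding; the length rule is invented) of why fixed-length decode parallelizes so much more easily: with fixed-length instructions every start offset is known up front, while with variable-length instructions you can't know where instruction N+1 starts until you've at least partially decoded instruction N.

    # Toy illustration only: the "ISA" and its length rule are made up.
    def decode_fixed(code: bytes, width: int = 4):
        # Every instruction starts at a multiple of `width`, so all start
        # offsets are known immediately and N decoders can work in parallel.
        return list(range(0, len(code), width))

    def decode_variable(code: bytes):
        # The length of instruction N is only known after inspecting its
        # first byte, so finding instruction N+1 is inherently serial.
        offsets, i = [], 0
        while i < len(code):
            offsets.append(i)
            i += 1 + (code[i] & 0x07)  # made-up length rule: 1 to 8 bytes
        return offsets

Real x86 frontends work around this with boundary predecoding and uop caches, but that's exactly the kind of extra frontend hardware and critical-path length the article is pointing at.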


64bit x86 squanders its variable-length advantage by being a 40-year-old design that has been extended over and over again.

Much of the opcode space is wasted with old single byte instructions that are rarely used these days. There are REX prefix bytes required everywhere to access all 16 registers. Modern instructions that are used all the time are hidden away behind prefix bytes.

64bit ARM is a complete redesign of the ARM instruction encoding, and they put a lot of thought into using the instruction space optimally. 32bit ARM also wasted a lot of its instruction encoding space, but 64bit ARM is a massive improvement. Despite requiring roughly 10% more instructions on average, the average arm64 binary is around the same size as the average x86 binary.

Immediates aren't a huge problem. All 32bit immediates and many 64bit immediates can be encoded in 1-2 instructions. Anything that can't should probably just use a PC relative load.

IMO, the much simpler instruction decoding massively outweighs the need for slightly more load bandwidth.


I've heard that there are other benefits, like the fact that memory RMW instructions are essentially a fairly clean way to address physical registers without using any architectural registers.

I'd love to see what a CISC-V ISA without the 40 years of baggage looks like (like jeeze, the hlt instruction gets a single byte on x86?)


Agreed. AMD didn't throw enough away when they moved X86 to 64bits. They also didn't add enough registers. Variable length CISC is fine. They should steal the bit manipulation instructions from ARMv8 and the vector extensions from RISC-V.

They won't.


"Variable length CISC is fine." Well, actually, the Anandtech article points out:

"Other contemporary designs such as AMD’s Zen(1 through 3) and Intel’s µarch’s, x86 CPUs today still only feature a 4-wide decoder designs (Intel is 1+4) that is seemingly limited from going wider at this point in time due to the ISA’s inherent variable instruction length nature, making designing decoders that are able to deal with aspect of the architecture more difficult compared to the ARM ISA’s fixed-length instructions."

Given the complexity of x86, it's amazing that Intel+AMD have gotten 4-wide decoders. But the M1 has 8-wide and if they want to go wider, it's linear rather than quadratic in complexity.


I think they did a fine job considering all the constraints. The ISA was created in 1999, but the first widely used 64-bit OS (Vista) didn't release until 7 years later. In truth, I doubt 64-bit systems became more than half of all systems until a couple of years after the release of Windows 7 in 2009 (a full decade later).

Every transistor used for x86_64 that couldn't also be used for x86 was a competitive liability (increasing size, power, R&D, etc without any real payoff). I think their decisions make a lot of sense given all the constraints.


I agree that they did a fine job given the constraints at the time. But by getting rid of x87, ... they would have used less size, power, R&D. SSE was circa 1999 and they could have just supported that.

They being AMD. The counterargument being that if AMD had been too aggressive, Intel could have done something more conservative. However, then there's the cross licensing agreement ... Arggh.


They made SSE and SSE2 extensions part of the core instruction set in AMD64.

But they didn't remove x87. It's not really used, compilers only emit it when code asks for a long double.

Personally, I do think they should have banned x87 from 64bit code. But it wouldn't have allowed them to remove the x87 units from the chip, as every AMD64 chip to this day still supports 32bit compatibility mode, and regularly uses it.


A code density prioritizing modern ISA is something that I've thought a lot about too. Would be an interesting possibility in the embedded space. I think that there are certain tricks that can be used to speed up the realignment problem, too.


Interesting point. I guess as cache sizes go up instruction density becomes less important. Also Arm abandoned Thumb - which was relatively easy to decode - for AArch64 and I guess they must have done quite a lot of analysis before doing so.


Thank you for the link. That article says the cpu has a 630 op reorder buffer. Does that mean it has to throw out up to 630 instructions on a mispredict? That sounds huge, so I wonder if I’m misunderstanding.


Imagine a cpu that executed this sequence of instructions:

    r1 = r2 * r3
    r2 = r3 * r4
    r3 = r4 * r5
    r4 = r5 * r6
At first glance it might seem that the execution order matters, because if you reorder lines in this source code the meaning indeed changes (e.g. the first instruction depends on an input in r2, but r2 gets a new value in the second instruction).

But if you rewrite the names of the registers (variables) and use a larger set of registers (variables) then you can express the same semantics without overwriting any registers. If you do so you can run the operations in parallel on different execution units within the same core.

The ability to issue multiple instructions at the same time is called "multiple (or wide) issue".

There are different kinds of functional units inside a CPU. For example, some can do basic arithmetic but not divide. At any given time, some functional units will be busy and some free. How much of the parallelism present in the input stream you can actually exploit depends on whether you can map your instructions onto functional units that are free at that time.

Often, while you cannot run a given instruction because, for example, the multiplier unit is busy, the next instruction can be run because it depends on the divider functional unit, which at that time is free.

The ability to execute instructions out of order is called, well, "out of order execution". It depends on the ability to rename registers and keep track of actual data dependencies. The cpu needs some memory to keep track of that data.

It's unrelated to pipelining (which I could give a try at explaining too, if anybody cares to hear a similar explanation).
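
To make the renaming idea concrete, here's a toy Python sketch (illustrative only, nothing like real hardware): each destination gets a fresh "physical" register, so after renaming, the four multiplies above read only p2..p6 and write four distinct new registers, leaving no false dependencies between them.

    # Toy register renamer: instrs are (dest, src_a, src_b) triples,
    # e.g. ("r1", "r2", "r3") stands for r1 = r2 * r3.
    def rename(instrs, num_arch_regs=8):
        mapping = {"r%d" % i: "p%d" % i for i in range(1, num_arch_regs + 1)}
        next_phys = num_arch_regs + 1
        renamed = []
        for dest, a, b in instrs:
            srcs = (mapping[a], mapping[b])   # read sources under the current mapping
            fresh = "p%d" % next_phys         # destination gets a fresh physical register
            next_phys += 1
            mapping[dest] = fresh
            renamed.append((fresh,) + srcs)
        return renamed

    print(rename([("r1", "r2", "r3"), ("r2", "r3", "r4"),
                  ("r3", "r4", "r5"), ("r4", "r5", "r6")]))
    # [('p9', 'p2', 'p3'), ('p10', 'p3', 'p4'), ('p11', 'p4', 'p5'), ('p12', 'p5', 'p6')]

None of the renamed instructions writes a register another one still needs to read, so (given enough multipliers) all four can be in flight at once. The reorder buffer mentioned above is, roughly, the bookkeeping that lets all those in-flight results be committed back in program order.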


Nice explanation. Wouldn’t mind reading more.


I'll try. On pipelining:

Most operations above a certain level of complexity can be broken down into individual smaller steps that have to happen in order, and each step takes some time. Once one such step is done, the next step is performed, etc.

If the logic for each of those steps is implemented in hardware, once a step is done executing, the logic just sits there idle, waiting for the next operation (composed of smaller steps) to be executed.

Pipelining is a technique that makes use of that otherwise idle logic to start performing the first step of the next operation.

For example, let's consider this small program:

    r3 = r1 + r2
    r2 = r3 + r1
    r1 = r3 + r4
A hypothetical CPU would execute this program in 12 clock cycles:

    load instruction 1
    decode instruction 1
    perform r1 + r2
    write result in r3

    load instruction 2
    decode instruction 2
    perform r3 + r1
    write result in r2

    load instruction 3
    decode instruction 3
    perform r3 + r4
    write result in r1

A pipelined execution looks like this:

    load instruction 1 
    decode instruction 1 | load instruction 2
    perform r1 + r2      | decode instruction 2 | load instruction 3
    write result in r3   | perform r3 + r1      | decode instruction 3
                           write result in r2   | perform r3 + r4
                                                  write result in r1
and completes in 6 cycles instead of 12. Once the pipeline is running at full capacity, it produces 1 result per clock cycle, a 4x throughput improvement over the naive approach above (this short 3-instruction program only sees a 2x improvement because much of the time is spent filling the pipeline).

This example architecture has a pipeline depth of 4.

The execution is still strictly in order. The idea is that you can start doing some work for the next instruction before the previous one is fully completed.

For example, you can decode the next instruction after you decoded the current one. Also, you can perform a computation on the next instruction as soon as you have computed the results of the previous one, even before you actually have written the results in the actual destination register, provided there is additional logic that ships the result of the current operation as an operand of the arithmetic unit for the next cycle.

The maximum duration of each step is bound by the clock period. The faster the clock, the less time you have for a single step. The deeper the pipeline, the faster the clock can tick and still produce one operation per clock tick. We'll see next some of the many things that prevent us from just riding this idea to the extreme, deepening the pipeline to ludicrous amounts and harnessing ludicrous clock speeds.

You may have noticed that this dummy architecture here has been carefully designed so that the next instruction can consume the result of the previous instruction if the instruction stream so requires.

Imagine a different architecture that is divided in 6 steps, where the + operation itself is pipelined in two steps.

    load instruction 1
    decode instruction 1
    access r1, r2
    start +
    continue +
    write result in r3

    load instruction 2
    decode instruction 2
    access r3, r1
    start +
    continue +
    write result in r2

    load instruction 3
    decode instruction 3
    access r3, r4
    start +
    continue +
    write result in r1


    load instruction 1
    decode instruction 1 | load instruction 2
    access r1, r2        | decode instruction 2 | load instruction 3
    start +              | access r3, r1        | decode instruction 3
    continue +           | start +  (HAZARD!)   | access r3, r4
    write result in r3   | continue +           | start +
                           write result in r2   | continue +
                                                  write result in r1
Now, here the second instruction depends on r3, which has not yet finished computing! This is known as a data hazard. A common way out is to introduce a pipeline "stall", i.e. a no-operation step is injected into the pipeline to resolve the hazard.

    decode instruction 1
    access r1, r2        | decode instruction 2
    start +              | access r3, r1        | decode instruction 3
    continue +           | nop                  | nop
    write result in r3   | start +              | access r3, r4
                           continue +           | start +
                           write result in r2   | continue +
                                                  write result in r1
This short stall is not a full pipeline flush. Ok, so we saw a data hazard, but so far all this has been pretty straightforward.

So far we were looking at a linear instruction stream. Let's see what happens when you have a branch:

    r3 = r1 + r2
    call func1
    r1 = r3 + r4
where func1 is:

    r2 = r3 + r1
    return
Let's see what our dummy 4-stage pipeline machine would do:

    load instruction 1                   
    decode instruction 1 | load instruction 2  
    perform r1 + r2      | decode instruction 2    | load instruction 3
    write result in r3   | perform pc + 1          | decode instruction 3 (HAZARD!!)
                         | write pc=func1, ra=pc+1 | perform r3 + r4      (^^^^^^^^)
                                                   | write result in r1   (^^^^^^^^)

calling a function basically means setting the program counter (aka instruction pointer) register to point to a different location instead of the next instruction. In order to return from the function you also need to save the return address somewhere (some CPUs use a "link" register, some use the stack, here I used a return address "ra" register); the return address is the address of the next instruction in the instruction stream before jumping to the function body.

As you can see, once we set the program counter and tell the CPU that the program flow continues from another place, the work the CPU has started doing in the pipeline becomes invalid.

    load instruction 1
    decode instruction 1 | load instruction 2
    perform r1 + r2      | decode instruction 2    | load instruction 3
    write result in r3   | perform pc+1, &func1    | nop
                         | write pc=func1, ra=pc+1 | load ins func1.1
                                                   | decode ins func1.1 | load ins func1.2
                                                   | perform r3 + r1    | decode ins func1.2
                                                   | write result in r2 | read ra
                                                                        | write pc=ra
                                                  
Jumping to a different address thus incurs additional delay, because we don't yet know where the jump is taking us until we have decoded the instruction and performed the necessary control flow adjustments.

In our dummy example this adds 2 cycle latency. Not all is lost since we can still do a little bit of pipelining, but for all intents and purposes this is a pipeline flush.

Some architectures (especially the older RISCs but it's still quite common in DSPs) work around this problem by introducing one or more "branch delay slots"; these are instructions that appear physically after a call/jump/branch instruction but get executed before the branch is actually taken.

An equivalent program for such an architecture would look like:

    call func1
    r3 = r1 + r2
    r1 = r3 + r4
where func1 is:

    return
    r2 = r3 + r1
And be executed as

    load instruction 1
    decode instruction 1    | load instruction 2 
    perform pc+1, &func     | decode instruction 2 | load ins func1.1   |
    write pc=func1, ra=pc+1 | perform r1 + r2      | decode ins func1.1 | load ins func1.2
                              write result in r3   | read ra            | decode ins func1.2
                                                   | write pc=ra        | perform r3 + r1
                                                                        | write r2
By avoiding the pipeline flush we can now complete the same task in 7 cycles compared to the 9 cycles shown above. Branch delay slots are no panacea though, and modern CPUs have largely moved away from them, favouring other techniques to predict whether and where a branch is likely to happen. I chose to mention branch delay slots in order to illustrate that pipeline stalls are not inherently necessary whenever branches are involved.

Stalls are ultimately caused by data dependencies. The program counter is just data, which the instruction stream itself depends on.

(In the real world obviously things get much more complicated but you get the idea)
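
As a minimal sanity check of the cycle counts above (idealized, with no stalls or flushes): a non-pipelined machine pays the full depth for every instruction, while a perfect pipeline pays the depth once to fill and then retires one instruction per cycle.

    # Idealized cycle counts, matching the 4-stage / 3-instruction example above.
    def cycles_unpipelined(n_instructions, depth):
        return depth * n_instructions      # each instruction runs start to finish alone

    def cycles_pipelined(n_instructions, depth):
        return depth + n_instructions - 1  # fill the pipeline once, then one result per cycle

    print(cycles_unpipelined(3, 4))  # 12, as in the first listing
    print(cycles_pipelined(3, 4))    # 6, as in the pipelined listing

Stalls and flushes add cycles on top of that, which is why the branch handling above matters more and more as pipelines get deeper.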


Does this also refer to physical die width? As in, does it also make manufacturing more expensive, with yields going down due to any small defect?


No, the size of the die isn't really related to the internal structure of a CPU core.


I would quote Orwell here: "Who Controls the Past Controls the Future".

It is about lack of control over legacy killing progress. The power of Intel is "proven" legacy, but building a completely new architecture is expensive and prone to adoption failure (Itanium?) when Intel has no control over the OS (Windows), the apps, or the virtualization software.

Then Apple has control over all that: they can make hardware (scaled up mobile HW), change their OS to support it (iOS bits), support past apps (Rosetta), enforce support in new apps (compiler, LLVM, certification).


Itanium was a really bad architecture for the time (and possibly still is). It was a bet that sufficiently smart compilers would be available, but they weren’t available in time. They aren’t even available today 20 years down the line in a form that would have justified Itanium if they were available then.

It wouldn't have mattered whether Intel controlled the OS or not; it just wasn't competitive. There's nothing you can learn from it about Apple, who delivered an ultra-competitive chip, likely even when emulating the Intel architecture in software.


I think that the ARMv7 -> ARMv8/Aarch64 move is interesting in this context.

Arm has swapped a large part of its application user base from 32 bit to a completely incompatible but more modern 64 bit ISA and Apple has cast off the 32bit legacy completely due to its ability to control the whole stack.


Apple is now Full-Stack. It's turtles all the way down to the CPU!


This is not really the case. Most Intel servers run Linux, and Intel has nearly complete control over Linux in a practical sense. They can put developers on it and update the OS, and unless it's a very user-hostile feature it will go into the mainline kernel and be everywhere. The same goes for the compiler and so on.

Intel could easily open up their boot and make the store there much better.

They don't quite have the vertical control of Apple, but to say they are fundamentally hindered by their legacy is quite questionable.

They fucked up improving their manufacturing.


Unlikely. There are more Linux devices running as Android on ARM than on any other processor. You can run Termux on ARM Android with a complete set of GNU tools. Also, ARM is supported natively by Debian and Ubuntu [1].

Once devs sort out their shiny new ARM Macs' woes (Docker, other tooling), I smell an ARM server market rising soon.

[1] https://help.ubuntu.com/lts/installation-guide/armhf/ch02s01...


Amazon already offers EC2 A1 (ARM64) instances, and I'm sure Nvidia's purchase of ARM is going to bear fruit in the data center.

AMD's Epyc chips are pretty awesome too, though, so I don't think x64 is on its death bed... but it should be an interesting couple of years watching this space heat up.


Intel's current designs are 1-3 years late, depending on which roadmap and sector you are counting. Caused by their 10nm delay.

Intel had the Tiger Lake / Willow Cove design but they can't ship it. They also have Golden Cove. If you put in the designs they were supposed to be using now, on an IPC basis Intel might still be in the lead.


It has really been cascading process delays dating back to 2014, when 14nm got pushed back and Broadwell's launch slipped, too.


What takes time to bring a chip to market? As in the question, if you know exactly what needed to be designed (in concept), how long would it take you to have a fab turning out units?

Is it the work to figure out how to lay down the component blocks on the die and all the circuit routing? Is it figuring out how to assemble the power-efficient components from the Arm library? Does that take years? Is it the instruction set? Is it the planning for manufacturing capacity?

I'm just curious to know what the actual work and division of labor / time is to "make a chip".

I would love to know for example:

-- Initial design, 10 people, 1 year

-- Prototype runs, 5 people, 6 months

-- Final design, 5 people, 1 year

-- etc etc.

-- Acquiring fab capacity / manufacturing hardware, 10 people, 1 year

-- etc etc.

But sounds like there's no way Intel or others can just "leapfrog" even if they knew what needed to be done and could abandon their prior choices.


>But sounds like there's no way Intel or others can just "leapfrog" even if they knew what needed to be done and could abandon their prior choices.

At best it takes one year to build a fab. It takes another one just to plan to build it. That is assuming you get regulatory approval with zero delay.

Building a fab means absolutely nothing if you can't get the process to yield. Look at Samsung Foundry in the past 3-5 years.

Intel's leapfrog chance was Intel Custom Foundry. Had they not messed that one up, they would have a fighting chance. Even if they had a design that leapfrogged Apple's M1 today, it would still be let down by their 10nm SuperFin process.


What about other ARM chips?


Other ARM chips are aiming for smartphones/tablets and are more than enough for that. No one using Samsung's latest top phone is thinking "if only this phone were 15% faster! This is a major issue."

No one makes ARM chips for laptops, because of the chicken-and-egg issue. Windows on ARM going mainstream could change that.


Amazon’s Graviton2 (originally from Annapurna Labs) is apparently delivering significant efficiency on the server side. Laptops will be there soon enough even without windows.


Hopefully without windows. It would be nice if we could get our own little *nix corner.


We've seen some of this already, for example with Microsoft's SQ1/SQ2 SoC manufactured with Qualcomm for the Surface Pro X series of laptops. They already emulate x86 well enough to run apps compiled for 32bit x86, and they should be coming out with 64bit support soon.

Once Microsoft is shipping these layers within Windows, it's only a matter of time before the use of ARM CPUs in more laptops takes off.


Does Windows on ARM support emulation of x86 apps? Probably the biggest advantage for Windows right now is that you can run apps from 20 years ago perfectly fine, and of course the large library of playable games built over the years.


So basically, given the same process, all companies have roughly similar offerings?


Disclaimer: I’m a chip design enthusiast and not a professional, but this is how I understand things right now.

If you read the Anandtech article describing the A14/M1 chips, it seems that the ARM ISA lets Apple do some things that are either exceptionally hard or impossible on x86, like making individual cores much "wider" in terms of instruction decode, allowing the A14/M1 cores to do significant parallel execution. This allows their cores to be significantly more efficient in terms of instructions per clock cycle. To be clear, this isn't just the benefit of ARM: Apple is far ahead of other ARM designers as well. On the other hand, the wider the execution pipeline, generally the slower the cores have to be clocked (I'm actually not sure why this is, to be honest). This is part of how Intel is able to produce chips in the 5.0+ GHz range.

So it’s not that every designer converges to the same design (internally, they’re quite different). Instead, what’s happening is that all the relevant trade offs (pipeline width and depth, ISA, fab tech, power targets, etc) are leaving competitors within an order of magnitude in performance. And the question of who’s going to take the lead is a question of who’s executing best on all these fronts, and who’s currently got a lot of talent working on areas that will see continued improvement.

But whatever happens, I wouldn’t bet against Apple. Their team has been delivering improvements like clockwork, year over year for quite a long time now.


I found this very interesting, and it seems a similar thing is happening to Intel in the DIY/gaming space right now. For a long time, the Intel i5/i7/i9 were the processors of choice for gaming, but AMD's Ryzen line has been making remarkable inroads, with the recently released Ryzen 5000 series boasting performance that competes with or tops Intel's top chips in the space, and generally with lower power draw to boot.

I'm not a tech journalist, but the way they seem to have done this was exactly what was described in this essay. AMD brought out "chiplet" designs, initially at the low end, but with significant performance gains generation over generation. Combined with simpler motherboard slot conventions, delays in Intel's next-generation processes, and outperforming Intel in areas like streaming, the value case for buying AMD processors in this space has grown to the point where you would find very few enthusiasts who would strongly recommend Intel over AMD if you're building a new system and want the best performance at your price point.

Seems the infighting issues described by another poster have affected more than just Intel's competitiveness in the mobile space.


>the Intel i5/i7/i9 were the processors of choice for gaming, but AMD's Ryzen line has been making remarkable inroads

I just checked my go-to hardware recommendation site, Logical Increments[0], and not a single Intel CPU makes the list. AMD everything.

[0] https://www.logicalincrements.com/


The new Zen3 series crushes the top of the line Intel desktop CPUs.

I'm on the 3900XT, one of the last from the Zen2 line - it's a ripper, yet I can also just drop in a 5900X or 5950X without having to change the motherboard.

Notwithstanding the many Intel security issues where the remedies nerfed performance, between Apple and AMD, I can certainly see Intel in a bit of struggle.


The fact that it's recommending a 3600 over a 10600k for a thousand-dollar build tells me there's a slant...


The 10600k is $100 more expensive and they're putting that money into the gpu instead. That makes sense to me.


The cheapest 3600 in stock is $80 cheaper than the cheapest 10600k from the same retailer.

And they've offered the 10600k with a $20 rebate on motherboards for months now...

$50 should come straight out of the case, which is totally overshooting for a budget PC (I recommend the 300L here)

That leaves you with a 10 dollar difference for a CPU that competes with the i9 in gaming...

-

And by the way, even the next build up chooses a 3600X over the 10600K, despite the latter beating even the 3700X in gaming benchmarks.

The 10600K is an insane value for gaming, and still fast enough for non-gaming tasks. Seeing as they're listing "heavy gaming" as a measurement on these builds, it doesn't make any sense for the i5, at the very least, not to make an appearance.


Well, but you want to spend money on the GPU if you care about gaming.

Consider these two builds:

1. Ryzen 5 2600 ($150) + RTX 3070 ($500) for $650

2. i5-10900k ($280) + RX 5700XT ($330) for $610

The first build with the RTX 3070 is far better. The RTX 3070 buys you into 1440p high/ultra @ 90+ FPS, or 4k med/high @ 60+ FPS. Will the 2600 bottleneck that GPU? Maybe, but you still get better performance for the price by investing in the GPU versus the CPU.


Who said anything about cutting GPU performance?

Also, your comparisons are quite poor... the 2600 shouldn't even be in the running here; the 3070 would be held back from doing the thing it does best, high-FPS 1440p (in actual games, mind you, not just CS:GO).


I mean, you said I should waste my money on a CPU upgrade that would make no difference in 1440p gaming. That's money I can't spend on a better GPU, which would make a difference in 1440p gaming.


You realize you're mentioning a 2600 right? Which will place a tremendous limit on 1440p gaming?

My suggestion over the included build is spending $10 more on the build and getting a 10600K.

Silicon Lottery has found that 100% of 10600ks will do 4.7 GHz sustained all-core.

Even at that number it will easily outperform a 3600 in a meaningful way. Over 70% of them do 4.9 GHz, which is where it starts to reach i9 levels of performance in gaming, by the way...

-

And the cherry on top over the 3600 is you can actually buy the i5 outside of Microcenter. Microcenter is the only place carrying the 3600 for $180... but they also have 10600k on perma-sale for $250.

Meanwhile outside of Microcenter the 3600 is rarer than hen's teeth while the 10600k is widely available at $270. I happen to have 5 microcenters within an hour or so of me, but most people don't have that luxury.


A 2600 + RTX 3070 will outperform a 10600k + 5700XT in 1440p gaming. You're way overestimating how much a 2600 would bottleneck the GPU.


Making a wrong statement confidently doesn't make it true.


Benchmarks:

Ryzen 5 2600 + RTX 3070[1]

    80 FPS - Ghost Recon: Breakpoint 1440p / Very High
    78 FPS - Horizon Zero Dawn 1440p / Ultimate
    72 FPS - Red Dead Redemption 2 1440p / High
    115 FPS - Death Stranding 1440p / Very High
    220 FPS - Doom Eternal 1440p / Ultra
    124 FPS - Resident Evil 3 1440p / Max
    75 FPS - Gears 5 1440p / Ultra

i7-10700k + RX 5700XT[2] (I couldn't find benchmarks for an i5-10600k with this GPU from the same source. The 10700k should be as good or better than the i5 though)

    67 FPS - Ghost Recon: Breakpoint 1440p / Very High
    73 FPS - Horizon Zero Dawn 1440p / Ultimate
    70 FPS - Red Dead Redemption 2 1440p / High
    111 FPS - Death Stranding 1440p / Very High
    185 FPS - Doom Eternal 1440p / Ultra
    104 FPS - Resident Evil 3 1440p / Max
    77 FPS - Gears 5 1440p / Ultra
1. https://www.youtube.com/watch?v=MkQuyRIbWpI

2. https://www.youtube.com/watch?v=hzSxytXsiTc


Those numbers are hilarious; there's a reason you found them on some random YouTube channel with no methodology or explanation...

I hopped on Gamers Nexus's 3080 review just to get a number for a 5700XT and an i7, and Horizon Zero Dawn comes in at 10 more FPS: https://www.gamersnexus.net/images/media/2020/rtx-3080-fe/hz...

You can spot check the other numbers and find similar discrepancies...

Of course the reason I can't find a proper reviewer doing this exact setup is because the idea of buying a 2600 for a new build doesn't make any sense period...

Even the i3-10100 beats it in gaming, and that costs as little as $100...


You're missing the 5000 series Ryzens, which they just updated this for.

The difference is key.


I'm not seeing any listing for 5000 series in the 1k price range, and the 10600k is actually in stock at or below MSRP...

Meanwhile it trades blows with the 5600X unless you're making a 7zip decompression and Cinebench rendering mule...

I'm guessing that's why they're not recommended? Being able to actually get the CPU is pretty important...


5600x is in the same price range and a no-brainer, once stock stabilizes.

There's no sense in intel right now when doing a new build.


> Once stock stabilizes

Giant asterisk during a global pandemic no?

And that's paying $50 more than what is apparently already too much, for almost equivalent gaming performance (slightly better after overclocking, which the motherboard and cooler priced in above support too)


Even the first generation of Zen was not the low end. The low end of x86 is well below anything whose name starts with 'Core' or 'Ryzen'.

Ryzen came in the door at #2 in a market that goes well beyond #10.

The motherboard conventions aren't simpler, just longer supported.

AMD is doing impressive work with Ryzen, don't get me wrong, but they've already been #1 in x86 performance before, in the mid-aughts. Before and after that, their flagships were #2 in x86.


Well, AMD is marketing that they have 20% YoY perf enhancements in the pipeline for the next 5 years. They also developed an internal org structure for developing 3 generations in parallel across teams. I would say AMD has high hopes because they have realigned themselves toward being successful, rather than suffering the corporate rot Intel is.


Do you have any links to read about "developing 3 generations in parallel"?


As another poster mentioned, AMD had a much better price / performance ratio in the mid 2000s too.

I worked for Intel then... The problem was AMD couldn't compete on volume. An individual could go buy an AMD chip at a great price, but AMD couldn't fill a large order from a PC manufacturer. Furthermore, we suspected their margins were too slim to keep investing in themselves.

Remember that DIY pc building is a niche. Something that makes perfect sense when you're building a single PC may not make sense when you're making thousands of laptops.


There's just one missing detail here: Intel was terribly, terribly bad at executing its plan. Tiger Lake was supposed to be here on 10++nm or 7nm or whatever much-improved process, which would have made it extremely competitive if not the absolute best. Intel knew very well they'd be in trouble if the expected process upgrades didn't work, and they didn't. It's not like folks there don't know how to plot that Core vs Apple Silicon performance graph.


Did they know that? Like, did the executives know?

They should have been suffocating their R&D with cash, but has that happened? Instead Intel is the world leader in buying companies, dissolving them, and EOLing their products. I think on more than one occasion they bought a consumer hardware company and refunded every buyer of whatever gadget that company made.


Naaaah, instead of that they've decided to get rid of senior employees with huge generational knowledge.

[0] "Proportionately, employees in their 50s were three times more likely to lose their jobs than workers in their 30s, according to a document obtained by The Oregonian/OregonLive that tallies every Intel employee in the United States."

[0] https://www.oregonlive.com/silicon-forest/2015/08/intel_layo...


What's interesting is that Apple apparently plotted that graph too. They have known for years exactly when that crossover was going to occur.


They certainly didn't expect the massive Intel fuckups from the last years. With the normal performance growth curve they would still be ahead.

* Broken 10nm and 7nm fab process. No end in sight. Even IBM could surpass them.

* Even with much stricter memory-ordering guarantees, they managed to break caching and C3 kernel/user separation security (Spectre, Meltdown, ...).

* Temperature management. Eg AVX512!

* The government "mandated" backdoors.

Who would have expected such massive Boeing-like fuckups from the industry leader?


Well, "expect" might be a strong word, but not only have Intel's CPU timelines been routinely slipping for years, the generational improvements have been kind of pedestrian for years (relative to Intel itself when you go back to the early 2000s). Intel has been behaving like they have no real competition for over a decade, and what they were shipping became their "normal performance growth curve" -- I don't think they took either Apple or AMD seriously.


I don't understand this ridiculous conspiracy. What competent government would force one company to backdoor their products but forget about their other competitors, all of which are domestic?


You believe AMD and Qualcomm don't have similar backdoors? Naive


> What competent government

Let me know when you find one.


Just because you think the US federal government isn't one doesn't mean they don't (or cannot) exist. Lots of governments in northwestern Europe are generally competent.

Of course a generally competent government probably wouldn't decide it'd be a good idea to backdoor CPUs, but that's another issue.


The EU is trying to get rid of encryption right now ... So they're just as incompetent.


We don't need an existence proof, we need a proof of impossibility.


Even if they had seen two more process shrinks, the relative efficiency benefit of that wouldn't have topped 30%, which still wouldn't make Intel chips competitive with the M1. What Intel needs is both a better fab process and a new core architecture. Apple might have waited one more year, or maybe two at the most, to get the same performance delta, but they would have switched regardless.


This is a controversial opinion, but I don't agree with the idea that big corporations should make an entry into every single adjacent market. If the company has core expertise and advantages that can deliver a decisive win in a known-lucrative market, then it's a no brainer. But if that's not the case, why run around chasing shadows everywhere, instead of focusing on harvesting the markets where you're already winning?

"But if you don't expand into other markets, you'll eventually die!" Sure, but that's not a bad thing. The purpose of a corporation isn't to continue existing in perpetuity, but rather, to maximize returns for its investors. You found a gold mine? Great, go mine it until it runs dry, then close up shop and call it a day. There's no need to spend half your profits on a wild-goose chase for other gold mines (blackberry investing in self-driving cars anyone?) - if that's what I as an investor want, I can just use my dividends to invest in other companies or startups that are aiming to do exactly that.

I'd rather invest in a company that sticks to the things that it can reliably excel at, and direct its profits back to me, so that I can then invest it in other companies that I trust in other fields. A company that is spending half its profits trying to be something it's not - that's my worst nightmare as an investor.


> The purpose of a corporation isn't to continue existing in perpetuity, but rather, to maximize returns for its investors.

Society grants a group of people immunity from personal liability when they form an organization that has some chance of carrying out some useful work. The organization becomes a special kind of entity - a corporation. It can operate without its investors & officers fearing that their own wealth is on the line should something go wrong. Societies do this because it is widely believed that it would be impossible for various things to happen without it. That is, if personal liability was always in play, nobody would organize human organizations to try to accomplish difficult/dangerous tasks, and the same might also apply to unlikely-to-succeed-but-potentially-significant efforts.

That's what the purpose of a corporation is, not maximising returns for its investors, which is a recent idea (dating back to the 1970s) and has nothing to do with why corporations exist.


True, but off-topic or supportive of the parent's point, which is that running off on tangents is not the purpose of the corporation.


By 2020, the "purpose of a corporation", both in the general sense of "what are corporations for?" and the specific sense (e.g. "what is Intel for?") has become fairly unclear.

Maybe it actually is in the interests of <some-group> that Intel runs off on tangents. How would we establish that? Well, we'd have to have some understanding of what the purpose of the corporation is. I don't claim to have any special insight into that, I'm just pointing out that claiming that its purpose is maximising shareholder revenue is historically revisionist and inaccurate.


> Societies do this because it is widely believed that it would be impossible for various things to happen without it.

It is not society's business to allow or disallow different kinds of contracts between private parties. If I know that a corporation's investors don't have personal liability and still decide to go into business with them, it's my personal decision to accept this risk, not society's. If I so please, I can go into even riskier deals, up to just gifting them my money. It could be a stupid decision, but it's my decision to make, not something that "society" can allow me to do – actually, not society, but its worst enemy, the government.

Of course, some governments act out of the assumptions you stated, but that is nothing more than a justification for tyranny.


You misunderstand the reach of limited liability. It caps liability not merely w.r.t. consumers/vendors/employees etc, but to anyone whatsoever, even a person who has no relationship with a corporation and did not enter into a contract with it.


You don't seem to understand the history of corporations at all.

They don't exist and were not created to sell potentially risky products to eager consumers. They were created to engage in capital-requiring efforts (notably expeditions, but also large scale resource extraction and industrial projects) that would not happen without the personal liability release.

They would not happen because the potential for liability related to the endeavor was clear to all (i.e. people would likely die, and vast amounts of money might be lost).

But societies (and notably royal families) wanted those things to happen, and so they created corporate charters to encourage them to do so.

We could make do without corporate charters, but then we'd also likely be making do without most of things they enabled, because essentially nobody would have had the guts to bet their own personal liability on what were clearly risky and dangerous endeavors.


I know the history of corporations pretty well, from the Hundred Years' War to the British and Dutch maritime corporations, thank you. But history is not ethics: what is and what ought to be are two very different things, not to be confused. Explaining how and why something happened historically has only a tangential relation to how it should happen and how we should view it from an ethical standpoint, and I was talking about the latter, not the former.


So you're in the camp of "we'll do without silicon fabrication plants entirely, thank you very much", because I can pretty much guarantee you that the liability attached to creating the initial versions of this technology would have ensured insufficient capital to proceed.

As another comments notes, corporate limited liability is not just, or even primarily, about liability towards your business partners. Build a factory, pollute the water, and without limited liability, you are ruined (which some would argue is the way things ought to be).


> The purpose of a corporation... to maximize returns for its investors.

That's a common misunderstanding.

"In nearly all legal jurisdictions, disinterested and informed directors have the discretion to act in what they believe to be the interest of the business corporate entity, even if this differs from maximizing profits for present shareholders. Usually maximizing shareholder value is not a legal obligation, but the product of the pressure that activist shareholders, stock-based compensation schemes and financial markets impose on corporate directors." [1]

[1] https://www.lawschool.cornell.edu/academics/clarke_business_...


I was really hoping to avoid going off on this tangent. To be more precise, I was comparing existing-in-perpetuity vs maximizing-investor-returns, and the latter is a more important goal. Whether corporations should prioritize morality and employee/customer interests, is a whole other discussion that is best debated elsewhere.


Lmao, you're on this again. Stockholders control the company. Their goal and the "purpose of a corporation" are synonymous, even if legally companies aren't forced to maximize profits. In fact, the legal requirements have nothing to do with the purpose of the company.


> This is a controversial opinion, but I don't agree with the idea that big corporations should make an entry into every single adjacent market. If the company has core expertise and advantages that can deliver a decisive win in a known-lucrative market, then it's a no brainer. But if that's not the case, why run around chasing shadows everywhere, instead of focusing on harvesting the markets where you're already winning.

The risk is that a competitor comes out of these adjacent markets and edges you out. That's what's looking likely to happen with ARM, which is slowly moving from embedded / cellphone to laptops and servers.


No company operates this way. Companies are full of people optimizing for their own fortunes, and the company going under is not and will never be a part of those plans.


Many creative productions take on a similar structure to what he proposes. Many entities, some individuals, some partnerships, and some corporations, come together to produce a single work. The associations dissolve when the work is finished.

Films, albums, festivals, etc.


Hmmm. I do not agree, because this is very much an "unless they succeed" kind of parable, and then you can spin it another way. Apple is now a chip company? I guess avoid Apple, because them re-inventing themselves all the time is your worst nightmare as an investor.


“The Innovator’s Dilemma” has a rather terrible track record as a handbook for tech corporations.

Nokia’s leadership was also big fans of this book in 2000-2010. They saw themselves as the nimble disrupters, eventually eating away all computing markets with their cheap plastic phones and a smartphone OS derived from embedded systems. If Nokia would just keep adding features, the Christensen theory promised that they’ll one day own the upscale markets too, like in all those case studies in the book.

Instead Nokia’s business was destroyed by actual innovation: Apple combined a new kind of mobile UI with an OS derived from high-end PCs. It didn’t fit on the ladders of innovation shown in the book. It wasn’t a cheaper thing becoming good enough, but an expensive thing that was nearly impossible to manufacture initially. And that’s why Nokia thought they could safely ignore it, until it was too late.


It's the opposite angle of attack, coming aggressively down from the high end. Same thing Tesla is doing.


This is a very interesting take. In Nokia's defense the incumbent leader of the PC market was Microsoft Windows and their phone efforts were not successful.


In general, mobile chip margins aren't great unless you are Apple. That is why you don't see too many competitors in the mobile chip market. You have Qualcomm, but they have a radio monopoly and their performance isn't that great. Intel's architecture isn't even that bad.

Intel used to have a manufacturing advantage, which they no longer do. That is where they are trailing: materials science and manufacturing. This stuff costs a lot of money and requires a lot of research. It is the kind of stuff MBA types don't necessarily want to invest in.


There are multiple 4G and 5G radio vendors; most people have just never heard of them because they are so obscure.


They are probably all paying QCOM, which owns a lot of the radio patents.


Huawei must be known!


That chart is not of competing technologies, but of market demand and one technology. The classic disruptive chart shows two technologies improving in parallel, but the performance customers demand doesn't increase as fast, so eventually the disruptive technology is "good enough" for those customers (even though the incumbent technology is better still: it "overshoots") and has other desired qualities (like battery efficiency), so customers all switch to it.

Here, of course, Apple is unlikely to sell its chips to other manufacturers... so any disruption will be at the product level: affecting wintel laptops, desktops... and eventually perhaps game consoles and linux servers. Everything.

BTW I really enjoy his theories, but let us note that Christensen himself assessed the iphone as non-disruptive.


> let us note that Christensen himself assessed the iphone as non-disruptive.

Which I suppose it is, but I think it's important to ask: if Apple didn't introduce a disruptive technology, why were they able to do what Nokia didn't?

It's hard to deny that the iPhone represented a power shift away from the carriers. It's easy to forget, but a decade ago you had to pay carriers for the privilege of changing your ringtone. They had their fingers in every pie, in exchange for access to the market the carriers more or less controlled. Hell, you often couldn't even bring your phone with you if you switched carriers.

Nokia was too busy pleasing their customers: the big cellular providers who ordered most of their phones. And the status quo heavily favored them -- the top-selling phone of all time is still a Nokia phone from 2004. They didn't really have any idea how to market directly to consumers, and their brand had no particular meaning -- they made as many SKUs as carriers wanted. Since they were in a clearly dominant position, they had no need to arm-twist the weakest player to break into the market, and a lot to lose if they played hardball.

Clay's right that it's not a traditional "cheap tech finds a new purpose, undermining the more expensive status quo" story, but it definitely rhymes -- all the incentives are the same, and that adequately explains why Nokia fell apart.


Allworth on Exponent and Ben Thompson (his co-host on Exponent) have both explained that Clay was in fact wrong about the iPhone being non-disruptive, because he didn't understand the ultimate value of the iPhone and got stuck on hardware<>OS as a single point of integration.


For anyone wondering, I guess this is the "classic chart": https://hbr.org/resources/images/article_assets/2015/11/R151...


> let us note that Christensen himself assessed the iphone as non-disruptive

Indeed the iPhone can be seen as opening a completely new market, instead of disrupting an existing technology. Laptops still exist, desktops still exist, servers still exist. iPhone expanded the technologies available, did not replace any existing one.


> Here, of course, Apple is unlikely to sell its chips to other manufacturers...

How do you gather? I think it'd make sense for Apple to earn more money by acquiring Intel's customers.


Firstly, it's not Apple's way. They favour vertical integration and premium-pricing.

Secondly, why have part of the pie when you can have the whole pie? i.e. instead of just acquiring Intel's customers, also acquire their customers, all the way to the end-consumer, gobbling up the margins at each step of the way.


> Secondly, why have part of the pie when you can have the whole pie?

This entails converting all Windows users into macOS users. This is not realistic, in my opinion.

In the end, the question should be what makes the most money: keeping Apple silicon exclusive to Apple (and thus selling more Apple devices) or selling Apple CPUs to competitors (and thus adding a new income stream).

I think option number two would make Apple more money, and therefore shareholders should prefer this approach.


I am not sure what Intel can do to come back to a strong position.

Afaik, all of Nvidia, Apple and Qualcomm (dunno about AMD) pay significantly higher wages than Intel.

If Intel is starting off on the back foot and cannot get top talent, then their decline is all but guaranteed.


Reminds me of when I went to visit Intel a few years ago. I was shocked at the bare bones amenities and that employees had to pay for everything (snacks/drinks/etc.). That might sound insignificant but in the talent war in SV that stuff mattered a lot back then.

It was a stark contrast from nearly every other fast-growing tech company in the area.


Ah, Intel. Acre-sized floors of tiny cubicles. Everything is grey. Really.[1]

I visited Intel HQ once, and it really did look like that. Is it still like that?

[1] https://youtu.be/gXReifFHXbY?t=44


Actually that was back when they were nice cubicles. Sure the colors weren't great, but the cubes were about five feet high and relatively big.

They blocked out most of the sound and very few visual distractions when you were sitting down. They worked well if you wanted relative peace and quiet to concentrate.

Since then they have switched to open-concept with cubes about half the size and walls that are maybe four feet high. So now people have plenty of visual distractions and a lot more noise. It didn't help that they added sit-stand desks, so you have people who get on calls while standing and can be heard 30+ feet away.


In the comments:

"I worked there when this was filmed. Immediately following the airing there was so much shame that they launched a whole office redesign and open office concept. Nothing like some well deserved ribbing to shake things up!!!"

"The "gray on gray" quip led to an attempt to add color to some office buildings. The open office redesign looked much better than the 80s and 90s cubicles, at least the one I saw in Santa Clara. Much more current, modern aesthetic, without walls. I left the company a couple of years after that so not sure how well it took hold. Friends who stayed said it didn't work out too well but it's been years since then (Conan did this bit in 2007)."


I was hoping to hear in the background "corporate accounts payable, Nina speaking... just a moment..."


No. It's all about open space now.


Hardware companies, and even some legacy software companies, have very different cultures than recent software companies. The 'recent' upstarts like Google, Facebook, Twitter, etc are either powered by massive profit engines or endless VC/investment money. Those places can set standards that aren't achievable by older companies that have lower margins and need to actually show profits.

There is also a cultural difference between hardware engineers, who are usually more 'serious engineers' who actually engineer things vs software engineers who are often less rigorous in their architecture and design. HW folks deal with long lead times and can't usually fix errors after production, so that creates a vastly different culture.


Hardware engineers may think they are more serious, but how many of them take the time to verify the numerics of the finite element, finite difference, or spectral method codes for computational electromagnetics, fluid dynamics, or quantum or classical mechanics modeling that they are often fully reliant upon to do their jobs?

As someone who has spent a career writing tools for internal teams at hardware shops, or at software companies that cater to hardware engineers, I have seen far more rigor from my technical computing software colleagues than from my hardware engineering colleagues. Most of my hardware engineering colleagues have had no idea what version control is, don’t know or care how floating point calculations can wreak havoc on their models, and can’t craft anything more complex than a straight-line Matlab script.

Software written by hardware engineers is often atrociously bad, has no tests, and can only run on the machine where it was written. I know I can’t even consider asking most of my hardware design engineering colleagues to touch a C compiler or validate the numerics of their models. They just as often prefer to rely upon their magical point-and-click GUIs or back-of-the-envelope, spherical-cow scripts and spreadsheets.


> employees had to pay for everything (snacks/drinks/etc.).

That’s not entirely true, at least in Hillsboro (I interned there recently). Drinks were free, and so was fruit, but you had to pay for e.g. donuts or chips.


The fruit's hilarious because if you're a contractor the fruit's not free. It's forbidden fruit.


Ah yeah, I forgot to mention that part. Contractors have to pay for a weekly subscription to the drinks and fruit.


Which is not a bad policy. I'm not sure I would want to work somewhere that offered free junk food.


I wonder if they still have the sign-in sheet at the entrance where you have to write down your name if you show up after 8am. I bet that didn't attract or retain a lot of top-quality talent.


I'm on the opposite side on this

I don't need snacks and drinks at work, just water and the occasional coffee, which I get at the bar anyway so I can have a little walk and clear my mind.

Workplaces that look like a teenage dream party give me the chills.

When I need to relax I stop working entirely and go out

I consider more money or more free time (or both) an incentive

I appreciate solid furniture, a comfortable seat, big windows with natural light, and a nice neighborhood (the last job I accepted was because they had an office by the Duomo in Milan; the one I have now has an office 5 minutes away from the Colosseum in Rome). I don't care much about "free stuff".

Paying for my drinks or snacks is not what will bankrupt me.


Snacks and other tchotchkes have become de rigueur in SV: I haven't had to buy my own lunch or snack for 8+ years now.

With the pandemic in force and the prospect of going into an office itself coming into question, companies will have to stop relying on the (lazy but admittedly effective) lure of free snacks and lunch. Good times.

On the whole, I agree with you: when I see people crumpled up in bean bags and picking at salad on picnic tables in companies' 'Come work with us' webpages, I get the chills. How very juvenile! But we all know that the industry feeds on a steady influx of juveniles and there's a new batch every summer!


To be fair to my employers, they usually paid for my lunches or had a cafeteria, usually not free, but very affordable.

It's good to have the option, but it's not what drives my decisions.

I have no problem with people that see it as a bonus, to each his own, it simply doesn't matter much to me.

But maybe SV corporate culture has gone a little bit too far on it.


A lot of the old-school employers run subsidised cafeterias. Cisco used to, back in the day. Pretty decent choices and reasonable pricing, until they started whittling down options making the cafes pretty unattractive compared to surrounding eateries in Milpitas (this was during the '09 recession). I know that Apple still runs them but they are of course notoriously thrifty (I'm pretty sure they don't subsidise as heavily as others). Same with Yahoo. Not sure if we had to pay, but it was pretty reasonable. Soon after that first recession, I started seeing a trend of catered food at smaller places, and completely in-house operations at the larger places. That trend has kept up until the pandemic struck.


I mean, sure, paying for your drinks or snacks won't bankrupt you -- but it's not gonna bankrupt the company, either, and in today's tech job market a row of vending machines outside the company cafeteria can feel a little nickel-and-dime.


Vending machines have been ubiquitous since my first tech job in the 90s.

I am from Italy, so they even looked like a big novelty back then.

I'm not blaming people who appreciate it. I consider my job my professional endeavour, and as I expect the best from myself, I expect it from my employer.

Which means they have to provide me with the tools and the means to do my job at my best.

I used to be a smoker; I wouldn't have considered it a bonus if my employer had paid for my cigarettes.

My salary has to be good enough to pay for that.


100% agreed

companies handing out chocolates and throwing parties are wasting resources that could go to salaries, re-investment or profit

having said that, a company as large as Intel, with such a dominant position for so many years, has a lot of margin to waste money and make mistakes, which is why such companies often become complacent


It won't bankrupt you, but it's the company making a statement on how much they value their engineers. And it may provide motivation to put in extra effort.


Intel is losing to TSMC most of all, in the manufacturing front.

Intel needs to reach 10nm and 7nm, and they needed to do that 5 years ago.


Sadly, it’s looking like Intel won’t do 5nm until 2023 at which point TSMC is projecting 3nm. They won’t be able to catch up.

On the other hand Intel is a huge national security interest for the US, so expect the government to help them out where possible.


First (and I'm no expert here), I don't believe that (x)nm is the be-all and end-all of CPU performance. Architecture still matters.

Second, perhaps the AMD/Apple fanboys have not noticed, but Intel has many times the volume of AMD and whatever ARM volume Apple will have (Apple is typically 10% of laptop/desktop sales, up this year due to Covid). The world requires Intel to meet its computing needs, and no magic wand will give the other guys that capacity anytime soon.

It is way too early to count Intel out.


> First (and I'm no expert here) I don't believe that (x)nm is the be all end all of cpu performance. Architecture still matters.

The same architecture on a smaller process will use less power and less silicon (usually making it cheaper, though yields complicate that equation). A 100mm^2 design using 50W will be only about 50mm^2 and 25W on a smaller node.

Since a 300mm wafer is always roughly 70,000 mm^2 in area, you now get twice as many dies per wafer at 50mm^2 as you did at 100mm^2.
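
A back-of-the-envelope sketch of that arithmetic, using the same hypothetical 100mm^2 / 50mm^2 die sizes as above (real dies-per-wafer math also accounts for edge loss and yield, which this ignores):

    #include <stdio.h>

    int main(void) {
        const double wafer_mm2   = 70000.0;  /* usable area of a 300mm wafer, roughly */
        const double old_die_mm2 = 100.0;    /* hypothetical die size on the older node */
        const double new_die_mm2 = 50.0;     /* same design after a full node shrink */

        printf("older node: ~%.0f candidate dies per wafer\n", wafer_mm2 / old_die_mm2);
        printf("newer node: ~%.0f candidate dies per wafer\n", wafer_mm2 / new_die_mm2);
        return 0;
    }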

-------

Now what made Intel get stuck on 14nm+++ is that Intel got really good at increasing 14nm++++'s clock rate. Dennard scaling is dead: a larger process may offer better GHz these days.

But if you can solve the yield issues, the march towards tinier and tinier transistors is a winner. Less power used, less chip-area used, and therefore cheaper and more-efficient chips.


If you can get it, there is no reason not to buy an AMD product in a laptop, or to design one in, unless you absolutely need Thunderbolt support.

And laptops sell way more than desktops do. This is a winner-takes-all market and there is no reason to go with the loser.


Intel's Ice Lake/Tiger Lake platform still has advantages in idle or light-use (not heavy computing) power consumption on mobile, thanks to their continuous improvement.


> On the other hand Intel is a huge national security interest for the US

This. I'm actually surprised they have grown so uncompetitive considering the critical products their CPUs are presumably used in. But Intel has been a money making machine for many years, I don't think most people realize the amount of resources they've accumulated when they compare them to AMD et al. I just hope they don't make more lousy investments and actually focus on sorting out their core business problems.


Yep. Intel aims to pay in the 50th percentile for jobs. The problem with that is that many FAANG-like companies can literally pay the top employees twice as much or more to come work for them.

A lot of the top-talent has left Intel and I think that will continue. Unfortunately for Intel most of the mediocre employees have stayed.


Is this really true, though? There are different types of job functions involved in designing and producing a new generation of a processor. Perhaps they have a tiered system wherein they will retain key talent by compensating it right?

Second, if you are a PhD in this area (a hybrid of Computer Engineering and Computer Science, and I believe that a PhD in this area is the minimum qualification for this type of work) then you do not have a large pool of employers to choose from in the first place, so any employer that's competing for folks from this group will likely end up with a decent number of talented people anyway.


What's the real reason Apple can design a higher-performance device? A better fab process? Is the ARM ISA fundamentally faster?

The graph conveniently omits AMD, who now beat Intel using the same ISA but a different process.


I'm still waiting for my M1 to show up but from studying the details that other people have commented on, it's clear that Apple is slathering on the microarchitectural resources that both Intel and AMD are stingy with. Their core has a gigantic L1 instruction cache, rather a lot of L1 TLB entries (4x more than Intel), large(r) pages by default, and many other aspects. L1 TLB entries in particular are a well-known problem for x86 performance so it's a real mystery why Intel doesn't simply add more of them. x86 is probably married to 4K pages forever, because of god-damned DOS, whereas Apple can make the default page size be whatever they want it to be. People with high requirements can jump through a million hoops to get their x86 program loaded into hugepages, but on Apple's platform you're getting it for free.
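
For anyone curious what those hoops look like in practice, here is a minimal Linux-only sketch of the explicit route: it maps one anonymous 2 MB huge page with MAP_HUGETLB, and assumes the administrator has already reserved a hugetlb pool (e.g. via vm.nr_hugepages). Getting a program's text onto huge pages, which is what the comment above is about, is considerably more involved (linker tricks or libhugetlbfs-style remapping).

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 2 * 1024 * 1024;   /* one 2 MB x86-64 huge page */

        /* MAP_HUGETLB requests an explicit huge page; this fails unless
         * huge pages have been pre-reserved -- one of the hoops. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");
            return 1;
        }
        /* ... p is now covered by one TLB entry instead of up to 512 ... */
        munmap(p, len);
        return 0;
    }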


While Intel is indeed losing the performance game, even with the shady Management Engine shenanigans, Intel CPUs are still much more open than the locked-down proprietary nightmare that the new Apple ARM SoCs apparently are.


That's why I commute to the office via horse — so I don't need to deal with those fascists at the DMV. Sure the costs of boarding and hay and horseshoes pile up, but at least I'm not being hassled by The Man.


This is a weirdly reductive take. Any new technology can be better in some aspects and worse in others. (To use your car/horse analogy: cars are much faster and have increased standards of living, but they have also had large negative effects on society and the environment.)

And it's a poor analogy because driver licensing is almost a straight consequence of how dangerous cars are, but the openness of an SoC does not directly follow from how fast it is.


And the PC has been a massive mistake that needs rectifying?


From the point of view of IBM, most likely yes, otherwise there wouldn't have been PS/2 and MCA architecture.

That's why laptops and 2-in-1 hybrids are basically back to the days of the vertically integrated 16-bit computers, of which the PC was the only exception.


So let's all make sure, for Apple's sake, that it is fixed then.

Not really, PC laptops and hybrids are pretty standardized. A slim case doesn't really make it vertical integration.


Try to use anything other than Windows or ChromeOS on such laptops with 100% support for the underlying hardware.


Many of us are? And we can replace parts from third parties.

It is getting more and more integrated, though, which makes it harder to replace parts. But that has more to do with greed and size requirements than with vertical integration, and a soldering iron can amend some of that. The driver situation isn't by any means something new, or necessarily an indication of vertical integration, either.


Looking at my Asus sold with Linux "support", I doubt it, which is why I explicitly stated 100%.

As for the rest, nothing that regular consumers would ever bother with.


I don't have any issues, but I did do my research.

We are talking about vertical integration, are we not? I don't see how any of that is relevant then.


Which is exactly proof of how Linux fails at that, by not providing it; thanks for confirming my assertion.

And since you did your research, the wider public would appreciate learning in which consumer shop one can get such an out-of-the-box experience.


I don't even know what you are arguing about anymore.


Since the DMV exists primarily for traffic safety, I assume Apple is removing your ability to run whatever you want on your processor for your safety too?


LOL, I almost thought you were serious for a moment.

The DMV is a revenue source for most states. Same with traffic cops. They both only have a tenuous link to traffic safety.


I was serious. I don't want to actually get into the usefulness of the DMV, I was just assuming a common opinion for the purposes of the discussion since the OP made the comparison to the DMV.


I think you will find that The Man is especially going to be hassling you with your horse.


Yeah, just like they hassle me with my wire-wrapped homebrew OpenBSD-based SDR mobile phone in my blue Radio Shack project box every time I try to get on an airplane. Why does The Man always pick on me?!


No private copter yet? Look at Elon. He flies to work in his Rocketman suit and disturbs the LAX flight controllers and pilots all the way. The Man


Yeah, this point is being continually missed - having better performance doesn't mean much if you are limited in what you can do by the OS, the ecosystem and the hardware. As it is, you can't run another OS or attach external or PCIe GPUs to anything Apple Silicon. So if macOS continues its downward trajectory, or if someone else comes up with a better GPU, you're stuck, unlike with a PC.


I think it is typical of Apple to have feature-poor early entrants and then increase the feature set with new generations. I am sure in the next few years the GPU and other PCIe issues will be resolved.


I'd bet money Apple and Nvidia remain incompatible for the foreseeable future.


So? Consider 3 aspects:

1. Power efficiency (read: battery life)
2. Performance
3. Openness

99% of consumers only care about 2 of those. The more that time goes on, the more it becomes clear that prioritizing openness, in particular for hardware, is a direct trade-off against building a great consumer product.


I think this is simply a consequence of the fact that the technological overlords who wish to oversee all of computing have not had a chance to overly lock down computers just yet (though they certainly wish to do it). Once they inevitably overplay their hand and the average consumer truly loses the ability to run the things he wishes to, you'll start to see a dramatic increase in the consumer's desire for openness.


Apple’s core is very wide. But wide usually means very high power consumption.

M1 apparently has much lower power consumption.

So something else is going on other than “Apple is willing to spend more to make cores wider”.


That, in part, comes from x86 vs ARM. x86, with its variable-length instructions, means your decode logic gets increasingly complex as you widen. Meanwhile, with ARM you have a constant instruction size (other than Thumb, but that's more straightforward than fully variable sizes), which lets Apple do things like an 8-wide decode block (vs. 4-wide for Intel) and a humongous 630-entry OOB (almost double Intel's deepest cores).
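
A toy illustration of why fixed-width instructions make a wide decoder easier (purely a sketch, nothing like real hardware): with arm64, every decode slot can compute where its instruction starts independently, while with x86 the start of slot N depends on the lengths of all earlier instructions.

    #include <stdint.h>
    #include <stdio.h>

    /* arm64: instruction i always starts at offset 4*i, so all decode slots
     * can locate their instruction in parallel. */
    static size_t a64_offset(size_t i) { return 4 * i; }

    /* x86: instruction lengths vary from 1 to 15 bytes, so the offset of
     * instruction i depends on every earlier length -- a serial dependency
     * that a wide decoder has to predict or brute-force its way around. */
    static size_t x86_offset(const uint8_t *lengths, size_t i) {
        size_t off = 0;
        for (size_t k = 0; k < i; k++)
            off += lengths[k];
        return off;
    }

    int main(void) {
        const uint8_t lengths[8] = {1, 3, 2, 7, 5, 1, 4, 2};  /* made-up lengths */
        for (size_t i = 0; i < 8; i++)
            printf("slot %zu: arm64 offset %zu, x86 offset %zu\n",
                   i, a64_offset(i), x86_offset(lengths, i));
        return 0;
    }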


I think this marks the definitive point where RISC dominates CISC. It's been a long time coming, but the M1 spells it out in bold. Sure, variable-size instructions are great when cache is limited and page faults are expensive. But with clock speeds and node sizes asymptoting, the only way to scale is horizontally. More chiplets, more cores, bigger caches, bringing DRAM closer to cache. Minimizing losses from cache flushes by more threads, and less context-switching.

Basically computers that start looking and acting more like clusters. Smarter memory and caching, zero-copy/fast-copy-on-mutate IPC as first-class citizens. More system primitives like io_uring to facilitate i/o.

The popularity of modern languages that make concurrency easy means more leveraging of all those cores.


Agree with everything except the RISC vs CISC bit. Modern ARM isn't really RISC, and x86 gets translated into RISC-like micro-instructions. To the extent that ARMv8 has an advantage, I think it's due to being a nearly clean design 10 years ago rather than carrying the x86 legacy.


AnandTech's analysis corroborates what DeRock is saying here about the x86 variable length instructions being the limiting factor on how wide the decode stage can be.

The other factor is Apple is using a "better" process node (TSMC 5nm). I put it in quotes because Intel's 10nm and upcoming nodes may be competitive, but Intel's 14nm is what Apple gets to compete against today, right now.

Intel has been defeated in detail.


> but Intel's 14nm is what Apple gets to compete against today, right now.

Intel's 10nm node is out, I'm typing this on one right now. It's competitive in single-core performance against what we've seen from the M1. Graphics and multi-core it gets beat though...

Or do you mean what Apple used to use? (edit: the following is incorrect) It's true Apple never used an Intel 10nm part.

EDIT: I was wrong! Apple has used an Intel 10nm part. Thanks for the correction!


I'm using a MacBook Pro with an Intel 10nm part in it. The 4 port 13" MBP still comes with one. I think the MacBook Air might have had a 10nm part before it went ARM, too.

There are still no 10nm parts for the desktop or the high-end/high-TDP laptops anyway afaik.


Tiger Lake is objectively the fastest thing in any laptop you can buy, regardless of whether its TDP is as high as others. You're right about the lack of desktop parts, though.


It's not faster than the M1, is it?


Maybe but I won’t give it credit for existing until it lands in public hands. You can buy tiger lake laptops off literal shelves, today.


Are you sure about the single-core performance? Geekbench has the M1 far ahead of 10nm laptops.


I'm typing this on a Tiger Lake, too, and it is very fast and draws modest power. But are they making any money on it? If they lose Apple as a laptop customer, how much does that hurt their business?


There was a lot of discussion on a related, recent post about Apple buying many of Intel’s most desirable chips. Will be interesting to see whether the loss of a high-end customer translates into more woes for Intel.


Maybe that will just make PCs better now that Apple isn't buying out the best chips.


> Meanwhile with arm, you have constant instruction size (other than thumb, but that’s more straightforward than variable size),

That's only for 32-bit ARM; for 64-bit ARM, the instruction size is always constant (there's no Thumb/Thumb2/ThumbEE/etc). It won't surprise me at all if Apple's new ARM processor is 64-bit only (being 64-bit only is allowed for ARM, unlike x86), which means that not only the decoder does not have to worry about the 2-byte instructions from Thumb, but also the decoder does not have to worry about the older 32-bit ARM instructions (including the quirky ones like LDM/STM).

That would also explain why Apple can have wide decoders while their competitors can't: these competitors want to keep compatibility with 32-bit ARM, while Apple doesn't care.


For the uneducated, what is an "OOB" here?


I think that's a typo - the Apple Silicon M1 has a "ROB is in the 630 instruction range" - https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

"ROB" and "OOO" might have gotten mixed together here.

OOO = Out-of-order. Refers to the fact that the M1 can execute instructions out of program order.

ROB = Re-Order Buffer. Refers to the stage where the parallel instructions get put back "in-order" and "retired."


From the Anandtech article[1] on the M1, I think this is referring to the reorder buffer, or ROB. Reorder buffers allow instructions to execute out-of-order, so maybe that's where the "OO" comes from.

[1] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Yes, I was going by memory, which thought “out of order buffer” was a thing. I meant ROB. Point still stands, in that scaling these blocks is very difficult with variable instruction size.


Not a chip designer, but I think it could lead to a more efficient use of the functional units.

With more instructions decoded per clock and a larger reorder buffer, the core should be able to keep its units busier, i.e. not wasting energy without producing valuable output.

This efficiency gain of course needs to outweigh the consumption of the additional decoders. This part is easier with ARM as decoding x86 is complicated.

In addition to higher unit utilization, the increased parallelism should also be an advantage in the "race to sleep" power management strategy.


> M1 apparently has much lower power consumption.

Even compared to similar manufacturing processes?


The manufacturing process is TSMC's, so probably not.

To me the innovation is using phone-type SoC technology.

A lot of the power (and die area) demand in modern chips is in the interface circuitry to drive the few centimetres of PCB trace between ICs.

Put the RAM in the same package as the CPU and you eliminate that interface. That gives you die space and power budget for big buffers and lots of parallel execution.

Interface transistors (and wires) are many times larger than the core compute logic transistors (and wires), so there is much more than a one-for-one gain.


There are no similar manufacturing processes, Apple has paid for time-limited exclusivity on 5nm.


Interesting that you mention page size, IMHO 4k was small even 10 years ago. Though of course the next bump is 4M which I think might be too big for most apps

Apparently ARM supports 4KB, 64KB and 1MB (I just googled that)

Also maybe all the x86 crud might be finally taking its toll (and not just the ISA but all the cruft that got on top of it for 40yrs+ like legacy buses and functionalities, ACPI, or even the overengineered mess that's UEFI)


All the legacy junk that's actually on the CPU is tiny. Even the x86 instruction decode you could lose in a corner of a 512x512 FMA unit.


16KB page size is what arm64 macOS and iOS on Apple A9 onwards use.
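
If you want to check what your own machine uses, here is a tiny portable sketch (it should print 16384 on arm64 macOS and 4096 on a typical x86-64 box):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* sysconf(_SC_PAGESIZE) reports the base page size the kernel uses
         * for this process: 4 KB on x86-64, 16 KB on arm64 macOS/iOS. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }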


There’s nothing wrong with ACPI - in fact it’s what’s going to enable ARM in the server space. Without something like ACPI, you need customized kernels and other crap to control system devices. ACPI is actually quite nice. And it replaced something (APM) that really was legacy cruft (dropping into real mode for power management?!! Ugh!)

EFI doesn’t seem that bad either. Maybe something like uboot would suit you better? But the reason we have those nice graphical bios utilities now is because of EFI.


It seems like a straightforward tradeoff of IPC vs scalability.

They are able to shovel transistors at the core because there are only 4 of them. AMD on the other hand is packing 64 cores into a package, so each core has to make do with less.


I believe my point is that the M1 seems to spend its gate budget where it really helps performance of software in practice, while x86 vendors are spending it somewhere else.

AMD in particular wasted an entire generation of Zen by having too few BTB entries. Zen1 performance on real software (i.e. not SPEC) was tragic, similar to Barcelona on a per-clock comparison. They learned this lesson and dramatically increased BTB entries on Zen2 and again for Zen3. But the question in my mind is why would you save a few pennies per million units by shaving off half the BTB entries? Doesn't make sense. They must have been guided by the wrong benchmark software.


I doubt it was about saving their pennies. Zen1 EPYC was a huge package with an expensive assembly & copious silicon area. But it was spread across 32 cores. A larger BTB probably had to come at the expense of something else.

What 'real software' are you thinking of? Anything in particular? Just curious, not looking to argue.

(sorry I changed my comment around the same time you replied)


Very large, branchy programs with very short basic blocks. This describes all the major workloads at Google, according to their paper at https://research.google/pubs/pub48320/


Is page size actually a noticeable problem for most people (say, MacBook users)? Linux has supported huge pages for years, and my impression is that it's still considered a niche feature for squeezing out the last percentage of performance. I don't think any popular distro enables it by default.


If you have a large program, putting its text on huge pages can make much more than a one percent difference. Any time your program spends on iTLB misses can be eliminated with larger pages.
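
For heap data (rather than program text), the low-friction way to experiment with this on Linux is transparent huge pages via madvise; a minimal sketch, and only a hint the kernel is free to ignore:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 32 * 1024 * 1024;          /* 32 MB of anonymous memory */

        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ask for transparent huge pages on this range. Unlike MAP_HUGETLB
         * this needs no pre-reserved pool and never fails hard; the kernel
         * may simply leave the range on 4 KB pages. */
        if (madvise(buf, len, MADV_HUGEPAGE) != 0)
            perror("madvise(MADV_HUGEPAGE)");

        munmap(buf, len);
        return 0;
    }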


Not just Linux; Windows also has large-page support. Back in the day when CPU cryptocurrency mining was still a thing (and for some cryptocurrencies aimed at CPU mining, it still is), some mining programs would ask you to enable large-page support.


Honestly? A lot of it has to do with the fact that they booked TSMC's entire 5nm production capability for some time.

So they are working on a smaller process node than their competitors, and they used their market position to ensure that they will be the only ones using it for a little while. That gives them a significant leg up.


Yep, like when Apple got an exclusivity deal with Toshiba on its tiny hard drives for use in music players.


Intel dropped the ball on 10nm and 7nm processes constantly for the last 5 years.

Skylake was the "tock", and then they couldn't "tick" to 10nm, so they made 14nm+, then 14nm++, then 14nm+++... Intel's been stuck on the same process for half a decade now.


This doesn’t come up enough: Intel’s CEO had to abruptly resign in June 2018 due to a relationship with an employee. [1]

That could not have come at a worse time for a company like intel.

[1] https://www.reuters.com/article/us-intel-ceo-idUSKBN1JH1VW


That could have been a good thing for Intel. The fish had already been rotting from the head for a couple of years.


Not for 2020-2025 Intel. For reference, Satya Nadella became Microsoft's CEO in 2014.


> Intel's been stuck on the same process for half a decade now.

Even worse - the same architecture as well, which is why they have to backport their 10nm Ice Lake architecture to 14nm to even get anywhere.


Part of it is just that Apple spent a bunch of time hiring the right people and making the right acquisitions.

Another part of it is just that designing a CPU with good single-threaded performance is hard and costs a lot of die area, and no one else in the ARM world was incentivized to do this - who needs a smartphone with that kind of performance (I know, I know - all of us, now)? But Apple also makes iPads, which are kind of a bridge device - mobile and battery-powered, but also big and something people might want to do real work on.

I’m also going to say that I think Apple is still alone among the phone manufacturers in recognizing that smartphones are software platforms, not simply hardware devices. I think they have a special appreciation for the enabling power of a more capable CPU.


Look at these graphs: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

The 5 Watt iPhone CPU almost beats the 45 Watt AMD CPU. And the M1 will then simply obliterate the x86 incumbents.

Why? The real reason is that Apple had/has the real advantage: its size, resources, money, strategy, know-how.

It's bigger than Intel. It has a better corporate culture than Intel, it has a vertically integrated market completely safe from any outside disruption for years, and will be for years. It had the opportunity to do this switch, so it had the motivation to develop this capability.

Apple was able to hire the best semi designers plus benefit from the ARM platform.

AMD is still too small and relatively spread thin - trying to cover multiple segments of the general market - compared to Apple, which basically has this one chip in two versions (mobile and ultra mobile).


Well, OK, but by that theory DEC Alpha or HP PA-RISC should have dominated?


Neither had the underlying single enormous vertically integrated market slice.

Apple has and had the iPhone. Apple started using their own chips, the A5, about 10 years ago. (In 2011 March, starting with the iPad 2. And they used ARM before that too.) And since then Apple optimized the "supply chain", incremental steps, but when you are Apple every step can be almost revolutionary (and year after year Steve told exactly that to the believers, and it's basically true, just not on actual feature front, but on hardware and systems level - eg. Gorilla glass for the phones, power efficiency, the oversized battery for the MBP, milling, machining, marketing, design, and so on).


Both examples were chips that were basically bought by Intel. The Alpha was a monster, and the PA ended up being replaced by the Itanium.


They have access to a fab process that is ~2 full generations more advanced than what Intel is shipping most of their products on, and 64-bit ARM has one advantage over x86 (fixed-width instructions) which allows them to make a very wide core.

Of those two advantages, the process one is a lot more significant. Their chip loses in single-threaded performance to AMD's newest, which are still one node behind Apple, but appears to be more power-efficient. I fully expect that once AMD gets on the matching node they will take the crown back, even in a laptop power envelope.


>What's the real reason Apple can design a higher performance device?

Everyone else with the immediate capability is a professional ball-dropper.


Part of it is that they're essentially using all of TSMC's 5nm capacity, right?


My guess, until reviewers get their hands on the thing, is that they've got some pretty good R&D alongside the massive process node advantage over Intel. AMD has already shown that there's a lot more performance in 7nm than we thought, and the M1 looks like it'll be on par with what you'd expect from Zen 4, so I wouldn't say they've got any advantage from the architecture (as you'd expect).


Right, but Zen 4 is a year away. If Apple is a year ahead, then that's a lot (even if it is because they are able to use 5nm node fab capacity first).


Apple has a greater incentive to optimize their chips for power performance because of their smaller devices (not just the phone, but the watch too).


Samsung has the same incentives. But their CPUs are far behind Apple.


Samsung's chip division doesn't have a guaranteed "buyer" or the option to build an SoC specifically for a given product (even if you can usually bet on the next-generation Galaxy device as your target, you still have to design as if you were selling to the open market - and that means less room to, for example, throw in a lot of memory/cache resources, which have been the main differentiator of Apple chips - which started from the same design as Exynos!)


> which started from the same design as Exynos

Apple was actually the one designing those cores in the Exynos - up until and including the Exynos 4000. That was when Apple decided to sever ties with Samsung and stopped designing cores for them and Samsung had to go back to stock ARM designs with the Exynos 5000. That was why the 5000 series sucked so badly - Samsung lost access to those Apple/Intrinsity juiced cores. It’s also what prompted Samsung to open SARC in Austin.


I'm wrong about that - it was the Exynos 3110 that had the Apple designed core that also appeared in the A4. And after that, it was back to ARM's humble designs for Samsung.


The Hummingbird in the SGS2 (this was before the rename to Exynos) and the A4 in the iPhone 4 stem from a cooperation between Samsung and Apple (who previously used Samsung SoCs in the iPhone 3G and, I think, in the 2G). The teams split before both phones reached design maturity. From then on Samsung kept iterating on the Hummingbird design until, I think, quite recently - I'm not sure my Note 10+ doesn't still have cores derived from that design.


> What's the real reason Apple can design a higher performance device?

Motivation to do so.

Intel had none.


I guess their sharp focus on phones and tablets led them to think... Hey, this is so powerful we might... And they indeed could.


My understanding is that there were a lot of bugs in the current processors. It was a quality issue.


It's also the fact that they are putting everything on one chip. With an Intel chip it's just the CPU.


I don't think Intel is out of things. Apple's M1 chip on preliminary analysis looks like a huge game changer.... for Apple. They don't have much of a history of licensing their tech (except for a disastrous period of Mac clones waaay back) so I don't see the M1 disrupting the notebook/desktop market.

That leaves AMD. AMD isn't eating Intel's lunch; they've had a pretty good taste of it and are about to say "no, not just lunch, we're taking all the food".

But there's a window. My not-very-expert opinion is that the window is 2 years. That's how long Intel has, while AMD chips away at its market share with newly installed systems, to turn things around. If they do it, they'll still have to claw back their previous position, or there will simply be a new stable equilibrium, but I think Intel's dominance in the server & consumer-grade space gives them some breathing room.

In the consumer space, shoppers for high-end systems (e.g., gaming) are gone: AMD has them now, and more every day. They have me, that's for sure. But the much, much larger consumer market is not in gaming. Intel has the name recognition for the average consumer, and it's their hardware in most consumer-grade systems. A quick glance at a dozen consumer-grade laptops & desktops from HP & Dell doesn't show any AMD CPUs in them.

So, Intel still has that level of inertia. If they can limit the bleeding in servers & high-end systems to a steady bleed instead of a near-instant & complete gushing torrent, that inertia may be enough to allow them to claw things back if they're able to execute within the next two years.

I'm no Intel fanboi: They lost me unless/until they turn things around. And from where they are now, I don't rate their chance more than 50/50, which might be generous. But I do think they've still got 2 years until their viability is really threatened. At that point, and I'm sure they've got some skunkworks & low-profile conversations going on about this already, they'll need to look at offloading high performance fabs to TSMC.

Personally, from a global strategy standpoint, I think the US might not want to lose what is really a strategic asset that way though, so we could also see some type of subsidy/bailout/whatever to prop them up.


Except the empirical data does not really back Clay's thesis in The Innovator’s Dilemma. Jill Lepore had an excellent analysis of the track record in a famous New Yorker article (1), which caused pmarca to melt down on Twitter. In his defense, I think the disruption thesis holds when the industry is fragmented. When a disrupter goes up against a monopoly, duopoly or oligopoly, it fails. Just a conjecture.

Even more generally, in business there are few hard and fast rules and even the ones that work lose their efficacy as the participants learn and adjust to them. Which is a good thing for the system as a whole. I would speculate that entrepreneurs should now look beyond disruption.

(1). http://www.newyorker.com/magazine/2014/06/23/the-disruption-...


Acidburn: RISC architecture is going to change everything.

Zerocool: Yeah, RISC is good.


No. Grove never understood disruption. Celeron was a strategic error. Otellini was right in foresight and wrong in hindsight on the iPhone. “Disrupt yourself” is a terrible strategy. That’s the entire point of “The Innovator’s Dilemma”.

Intel’s real disruptive strategy was the Pentium Pro architecture that disrupted the mainframe market, proven by ASCI Red in 1995. But you don’t hear about that because the business leaders didn’t even know it existed (same with multi-core, btw). The problem with this, as with all disruption, was low margins, so IA-64 was an attempt to enter the mainframe market as a peer of the high-margin incumbents, and was itself disrupted by the broadly inferior but cheaper AMD64. Low-margin ARM disruption also should have been expected as normal and unavoidable.

The big question is what market Intel can disrupt next. I have criticisms I’ve held back on, but one thing I will say is that Intel’s portfolio of disruptive technology is greater than I had expected. Leadership has to be able to put together the pieces. NVidia, for example, probably has less core technology, but their direction is more aligned.


Long time coming. They simply didn't understand how the s/w world works and never hired people who understood how customers were actually using their processors. It was/is a h/w company that had a manufacturing advantage - but it lost all its edge because it strayed too far from end customers. They obsessed over their own capabilities and missed the wider world view.


Honestly I've got a few thousand invested in them on the currently unfounded bet that they'll weather this storm despite the current lack of any evidence.

I made a similar bet on SUN about 17 years ago and a smaller one on RIM (Blackberry). I'm getting a deja vu.

This all started when I bought a bunch of Apple stock around $2 and for the past 20 years I've been trying to make that play again. I guess I'll have to admit this position in GE is similarly worthless.

I gotta stop making these gambles of faith, Intel has ceded nearly everything and is defending their remaining kingdom poorly. Unless there's a major overhaul, things aren't looking good.

They've also been bleeding talent for years. I need to stop second-guessing what I find obvious. Nvidia, Arm and AMD have already stormed the castle and have been pillaging the kingdom for years now. Once you see companies like TI and Marvell start to take significant pieces away, then you'll know they're done for.


The classic disruption story is clearly important here as is the pattern of recent failings by Intel. There have been other factors though that contribute to a more complex story:

- The availability of third party IP (Arm CPUs and originally Imagination GPUs) and manufacturing (Samsung and TSMC) that have allowed Apple to pick and mix to build its own SoCs.

- The fact that Arm has clearly been much more focused on power efficiency which might mean it has IP which Intel can't access (speculating here but we have big.LITTLE).

- Possibly most importantly, the benefits that Apple gets from being able to own the whole stack and design SoCs that precisely match its needs.

As a result I'm a bit sceptical about whether Intel would have got Apple's mobile business long term given that market's eventual structure: the advantages of Apple's current approach are just too great.


> It was not until 2005 that Grove retired from Intel.

October 2007, CFO Andy Bryant Promoted to Operations Chief.

September 2009, CTO Pat Gelsinger left Intel.

2010, CEO Paul Otellini started work on Intel Custom Foundry.

July 2011, Andy Bryant was appointed Vice Chairman on Intel's board.

May 2012, Andy Bryant became Chairman of the Board.

May 2013, The Board appointed Brian Krzanich as new CEO.

And the more recent event you should all be fairly familiar.

Other than that, if I could sum up Intel's problem in one word, it would be inertia.


Intel CEO on them losing the iPhone CPU deal:

> "We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it"

No, it wouldn't. Maybe the iPhone would have been delayed 1-2 years. But they surely didn't have the capability to make this CPU properly, as all of their attempts in this field over the following years demonstrated.


It doesn't look good for Intel but this isn't the first time, so we'll see.

But AMD is dominating them on one front, and now a major customer of theirs makes its own chips, which on the surface seem to be better.


AMD is also threatening Intel's business with the newest Ryzen chips. They're losing business on almost every front it seems.


I’m curious to see if Apple will bring anything similarly disruptive to the server market. So far servers haven’t been their strong suit, but a chip this attractive could change things.


That's one of the many bad effects of vertical integration success: if all the most profitable use cases are harvested by a vertically integrated dominator, very little development budget will be available for less closed hardware that can be integrated in niche use case solutions.


Even if they developed the server tech, it seems they’d be more likely to just use it internally to power their services as opposed to trying to reach a whole new market that historically they aren’t very good at working in.


It would seem really out of character for Apple to make a move in the server space.


https://en.m.wikipedia.org/wiki/Xserve

Apple’s Xserves were awesome. Not cheap. Only marginally competitive. But a joy to work with because of all the nice little details like a FireWire port on the front so you could boot it off of an iPod.


Yeah I remember Xserve! But if anything it seems like it was evidence that Apple wasn't really able to "get" the server market, where hardware is a commodity and price and compatibility are basically the only concern. Not to say that couldn't change, especially if their own processors had a big advantage in price/performance, but there hasn't been a whole lot of evidence that they have what it takes to succeed in this space.


While most of your points are true, I will quibble with this:

> there hasn't been a whole lot of evidence that they have what it takes to succeed in this space.

Does it matter? In 2006 and 2007 there were legions of pundits, columnists, and "experts" who said the same thing about Apple making a telephone.

I wonder if there's a market for a server optimized for iOS builds. Cram as many Mx chips as possible into a rackable case with abundant cooling. Certainly it would be useful to Apple internally. But I wonder if there are enough iOS developers with the budget to make this workable.


I think the pundits in 2007 were obviously wrong to anyone who followed Apple. I mean, perhaps you wouldn't have predicted the level of success, but it was clear that the iPod was all the experience they needed to pull off the iPhone, and it was also clear to anyone who used phones that if they could get the internet to work well there was a big market.

In contrast, I would say there's very little evidence Apple can pull off a majorly successful server play. It's not impossible, and I think the trends are potentially in their favor, but it definitely doesn't play to their strengths. Apple's biggest strengths are very consumer oriented, and the trade-offs they typically make are not great for an enterprise crowd (expensive hardware, willingness to abandon legacy compatibility, etc.).

There's probably a smallish market for very specialized servers that Apple could fill well, and maybe they will. Long term maybe that could translate to a wider market if they really make an effort. But I think it's at the very least a significantly longer time horizon than the iPhone.


Sure, as I said, anything is possible, and that can change, but to be fair the iPhone is a consumer device, which was firmly within Apple's wheelhouse already when they were developing the first model.

Do you really think there's a market for dedicated hardware for iOS build servers? I mean there is some market for this since CI vendors already do iOS builds, but it seems like a bit of a stretch to imagine there being enough demand for Apple to actually put the R&D and product development resources into making this a reality.


Both Xserve and A/UX were only ever relevant within Apple shops, nothing else.


> But a joy to work with

I think you are using the past tense. Those things are still in production in some places. I wish they weren’t.

Also, I’d like one as a chassis for a Pi or NUC cluster.


I've seen them on fleaBay pretty cheap. Sometimes by the pallet.


We know they have tens of thousands of Linux boxes running iCloud and the App Store and whatever else... if they can use their own ARM servers and reduce electricity use by 50%[1], you've gotta think they're looking seriously at that, purely for themselves.

Not to mention buying their own hardware will obviously save them.

[1] https://twitter.com/ajassy/status/1318930486517927937?s=20


Purely for themselves, or also for entering the cloud infrastructure market. In this day and age, if you are big enough (Apple surely is) and happen to have superior server hardware, it wouldn't really make sense to sell that hardware; you'd rather keep it and rent it out.


An Apple alternative to AWS would be interesting, but it would probably be heavily geared toward iOS and Mac app backends, with little flexibility outside those use cases. The Xcode integration would probably be amazing, though.


It's hard to imagine Apple really getting the cloud market, or what their product offering would look like. Cloud by necessity has to be pretty agnostic and flexible to meet the diverse needs of customers. Apple hasn't really demonstrated an ability to succeed with this type of product, and leans heavily toward very tailored, limited, "Apple knows best" types of products.


The cloud is changing. At my gig we've embraced GCP's managed cloud services, and I do not want to go back to 2015. The operating cost savings (no devops people!) and the low cost make engineering systems for cloud-agnosticism very expensive by comparison. If Apple provided a range of managed services that met clients' needs, they could find a lot of success. Would they beat AWS? Highly doubtful, but Apple already provides a lot of cloud data services to Apple developers and consumers as part of iCloud.

I'm hopeful with respect to Apple dogfooding its silicon in the data center because it may feed back into robust ongoing support for Unix/Posix on their publicly-facing platforms.


> I'm hopeful with respect to Apple dogfooding its silicon in the data center because it may feed back into robust ongoing support for Unix/Posix on their publicly-facing platforms.

I mean that's one possibility, but what seems much more "Apple-ey" (aka likely) is that they would release a very much locked-down platform of tightly controlled, managed services which would perform well and would have a beautiful dashboard design, but would offer a small subset of the functionality which is available from AWS or GCP, they would make some weird choices - like only allowing Swift for configuration files - and all of this would be very proprietary in nature and distant from their consumer products.


That's not what I'm getting at. Regardless of the details of the cloud offerings, migrating from Linux to Apple Silicon would help the MacOS and iOS developer experience, because it'll all (MacOS, iOS, Apple's rack OS) be using their BSD-derived Unix OS under the covers.

Right now Apple's cloud engineers are to some extent helping Linux be a better OS by filing bug reports, creating PRs, etc. That activity, directed toward an Apple Silicon rack OS, would redound to the benefit of those of us sitting behind MacBooks, typing into the terminal, treating it as a Unix box with nice driver support.


The question is whether you can make a competitive cloud infrastructure service without x86. I imagine that would be a nonstarter for a lot of the market, at least for now.


ARM in the cloud is already a thing. A lot of companies have been moving towards serverless infrastructure in the past few years, and I can't imagine it makes any difference at all for a large set of use cases whether you're executing your 50-line Go lambdas on x86 or ARM.
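
A minimal sketch of what I mean, assuming the standard aws-lambda-go library (hypothetical handler, not anyone's production code): nothing in a typical handler touches the ISA, so the same source just gets built with GOOS=linux and either GOARCH=amd64 or GOARCH=arm64.

    package main

    import (
        "context"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // The handler is plain, portable Go; whether it runs on x86 or ARM
    // is purely a build/deploy detail.
    func handler(ctx context.Context, event map[string]interface{}) (string, error) {
        return "ok", nil
    }

    func main() {
        lambda.Start(handler)
    }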


The vast majority of the potential market is running somewhere between some and all of their stuff on x86 in the cloud. Of course someone could launch an ARM-only cloud. They'd either be severely limiting their potential market, or they'd have to convince prospects to migrate to ARM. Certainly not impossible, but is it competitive?


What migration? Most of the code I have run in the cloud in the past 5 years has been in the form of managed services. I don't really know or care if AWS is running my code on ARM or x86.


I think you are in the minority. EC2 is popular.


Moving their own servers to Apple Silicon would save a lot of money, both when buying and when operating the machines. But once they do, resurrecting the Xserve could be a further source of income and raise chip volumes (which drives down costs). Imagine the margin on an Xserve powered by Apple Silicon vs. having to buy Xeon chips.


> Not to mention buying their own hardware will obviously save them.

Maybe, maybe not. Apple doesn't currently produce servers, so it would probably take a lot of time and money for them to build out that capability to the point where it's break-even or better compared to just buying commodity hardware.


Or heck, even just the general-purpose chip space.

Apple's current ARM offerings are strictly locked to their own hardware, and there's been no hint that they may move into the ARM-PC space by selling general-purpose chips and letting other vendors produce boards. I'd expect to see that first, before an Apple Silicon powered server.


Yeah, Apple's never been a component company; they're all about vertical integration. Still, it's hard to imagine that, if they actually do end up producing the best price/performance processors, those would stay relegated to consumer products, given the massive opportunity the server space would represent for cloud-based companies trying to bring down costs.


I don't see Apple selling their chips.


They don't have to. As long as Apple can provide the ARM boxes for developers, others can and will create the ARM chips for the server.


Nuvia is basically the server version.


How is Nuvia connected to Apple?


All three of Nuvia's founders were ex-Apple silicon designers in some way or other (though not all immediately before founding Nuvia).


Same ISA (ARM), same philosophy, some of the same people.


Thank goodness. They haven't innovated in decades and x86 is a mess. I can't wait until RISC-V does the same to ARM. It's super frustrating to see America, which used to be innovative, become the complacent, corrupt incumbent.


I guess you didn't read about their Xeon Phi bi-directional ring architecture that achieved dozens of terabytes/sec of bandwidth, their 512-bit vector-processor ISA, or, heck, just for giggles, the industry's first FinFET (and high-k dielectric) on p1270, p1272, and p1274. But, please, do go on.


Intel has tons of cool stuff going on, but from the outside it looks like a big risk to adopt any of it. Optane is pretty cool, but who will be surprised if they EOL it tomorrow? AVX-512 is obviously of certain utility, but 99% of computers lack it; I myself just got access to a CPU with this feature a week ago. They've got a sweet programmable 100Gbps NIC that they only ship into the mobile base station market. Why? KNL is a complete joke that nobody wanted or wants. Features that everybody wants, like UMWAIT, only exist on obscure parts you can't get. Having a lot of cool hacks isn't translating into much visible success for Intel.
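
To make the utility-vs-availability point concrete, here's a minimal sketch (using the golang.org/x/sys/cpu package; which flags you actually gate on depends on the kernels you ship) of the runtime guard every AVX-512 code path ends up behind, precisely because most deployed CPUs don't have it:

    package main

    import (
        "fmt"

        "golang.org/x/sys/cpu"
    )

    // Any AVX-512 fast path has to be guarded like this and shipped
    // alongside an AVX2/SSE fallback, since the feature is missing on
    // most x86 parts in the field.
    func main() {
        if cpu.X86.HasAVX512F && cpu.X86.HasAVX512VL {
            fmt.Println("using AVX-512 code path")
        } else {
            fmt.Println("falling back to AVX2/SSE code path")
        }
    }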


And it was such an outstanding success! Intel's customers were able to get so much use out of it - it'll always be remembered as a high point in supercomputing.

Seriously - it was a resounding failure, even with Intel spending big bucks to convince this class of customers it was the future. There was little uptake and vast dissatisfaction with achieved results. IMHO I expect Xe to follow the same (very shallow) trajectory in HPC as in gamer graphics.

There is one remaining large machine (DUG "Bubba") using Xeon Phi and that's only an artifact of software tolerance combined with a _really_ great deal received on Intel's remaining "Knights Whatever" parts inventory.


Oh I completely agree. But success and innovation are orthogonal, and the original question was about innovation.


But whatever real innovation underlay Xeon Phi/KNx was performed two decades ago. Current Intel is only capable of SKUsmanship - minutely partitioning function for projected max margins.

Edit: Also too, remember that KNx was really an attempt to salvage Intel's failed Larrabee GPU effort.


> Also too, remember that KNx was really an attempt to salvage Intel's failed Larrabee GPU effort.

I do, and I don't think you know that the first "Knights" project was simply a rename of Larrabee. Same exact product. It was called Knights ... Landing I think? Then it became Xeon Phi.


Knights Ferry was the 1st. Unclear about the actual diffs between it and Larrabee silicon. I do know the 1st boards still had a video out port but it was disconnected.


> Xeon Phi bi-directional ring architecture that achieved dozens of terabytes/sec of bandwidth

...and yet, was it any better than a midrange GPU in compute?

Intel is the new IBM. IBM at the end of the '80s was filled with smart people innovating piecemeal technologies that couldn't be brought together to create compelling products. And the products that they were selling were expensive and technologically moribund.


Super, super weird take considering that Intel's main competitors are all within walking distance of their Santa Clara HQ.


Their only competitors that really matter are in Taiwan and South Korea. Fabs are harder than chip design.


Nope. That's where many people are wrong. Intel's main competitors are not in the US; they are in Taiwan and Korea. Intel's domination came not only from superior chip design, but from the fact that only they could manufacture that superior chip design.

Now that this is no longer an issue, you see Intel getting beaten everywhere. And that is only because Intel's actual competitors in Taiwan and Korea have now surpassed Intel.


> And Apple converted. Steve Jobs invited Otellini on stage at Macworld to make the announcement.

In case anybody was wondering why Otellini was wearing a cleanroom suit on stage, it was a reference to Apple's 1998 "Toasted Bunny" commercial for the Power Macintosh G3: https://www.youtube.com/watch?v=e6PoLiXCA40

…which was itself a reference to the Pentium Ⅱ commercials of the time: https://www.youtube.com/watch?v=qvt65mHXmR4


This article assumes Apple was/is the only x86 game in town, and completely ignores AMD. A quick google says Apple owns about 7% of the "PC market".


Grove embraced Christensen's paradigm for innovation and backed this learning with investment and the risk of cannibalisation inherent with Celeron.

It's easy to be wise after the fact, but we can now say that Grove or Otellini didn't take a big enough risk. Instead of risking cannibalisation of their core markets, they might have won a bigger prize over the longer term by straying further.

> Desperate Gamers Camp Out in the Pandemic for $700 GPUs: Months after the Nvidia GeForce RTX 3080's launch, the lines at brick-and-mortar stores across the country can still wrap around the block.

https://www.wired.com/story/nvidia-amd-demand-pandemic-lines...

C'est la vie.


My question is: why has Qualcomm, while using the same architecture, not produced a chip as fast as the M1?


You're missing the wildly different target that Apple can follow.

Apple designs chips and devices in lockstep. When designers were working on various A-series chips, they knew the details of the device, and could optimise on that.

Qualcomm doesn't get that kind of view. They make a generic CPU design for a given market segment and have to "please" a much wider range of buyers. As such, if they tried to apply some of the things that are publicly known about Apple's designs, they would probably end up with a chip that would be hard to sell due to taking more die area or having a higher TDP, etc. And before the advent of multicore, Qualcomm was using the available space budget heavily to integrate the radio functionality internally (then it oscillated a bit over time depending on chip and market segment).


Qualcomm is a virtual monopoly in the space it serves (mid to high end non-Apple non-Chinese phone chips); arguably it doesn't have that much incentive to do anything much.


And they are supposed to be a much shittier company than Intel, or Boeing or Oracle. You need to be really desperate to work there.


Nobody has mentioned it in the comments: it's because their money maker is 5G modems (and of course LTE et al. before that). The ARM SoC is a bonus they throw in for customers that need the integration (Android vendors, but not Apple).


I suspect there was some talent at Apple that figured out some details which Qualcomm has not, at least so far. This guy seems to improve things at every place he has been. He left Intel abruptly this year, which hopefully is not due to health reasons.

https://www.anandtech.com/show/15846/jim-keller-resigns-from...


Qualcomm doesn't have a business model that involves high-end processors. Who are they going to sell those SoCs to?


I don't think they have access to 5nm, as Apple has pretty much monopolized it so far. And Apple put together a very, very good design team.


Because their profit comes from the SoC alone, whereas Apple's comes from the entire product, they are forced into certain design compromises.


They don't need to; they are in a much more down-to-earth market niche.


Ctrl-F "AMD"... 0 of 0

Wow, they don't even know exactly how disrupted they are...


Question: now that we know that Intel is dead and stuck on a 14nm process, why has the share price not tanked yet? I mean, shouldn't it be trading on its future P/E?


Optane seems to be their one bright spot engineering wise. But unfortunately they have priced it beyond what the market will bear.


Is it really that meaningful to talk about disruption if the disruptive product isn't available on the general market?


Great article. However, I doubt Apple is interested in Intel's server business, which will probably save the company.


Apple is worth $2 trillion, and TSMC $410 billion. I'd hardly call that "disruption".


OT: I can't help hearing that title in the voice of James Earl Jones as Darth Vader.


That's kinda sad, because Apple is much bigger than Intel. And it seems those bulldozers will never stop until there is only one company in the world; my bet is the final battle will be between Google and Amazon.


It is amazing to see the innovator’s dilemma performance vs. time plot play out with Intel.

Does anyone know of a collection of other similar crossing plots on this topic? Would be really cool to study those...


Between all the buzzwords and the Intel hate, what is the technical tl;dr of Intel's position? I understand they are about 1.5 generations behind TSMC: they should be at 7nm, which would compare with TSMC's 5nm. Also, ARM is currently more power efficient. Are there any other fundamental differences between the AMD/Apple/Intel architectures?

If Intel's 7nm magically gives awesome yields next year, will they be able to catch up with bigger caches and a big.LITTLE-style architecture? Will they need a lot more help from Windows/Linux to achieve performance gains in lockstep, the way Apple does by controlling both the OS and the processor?
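
For what it's worth, part of that OS/hardware cooperation already exists on Linux today: heterogeneous (big.LITTLE-style) cores are described to the scheduler as per-CPU capacities, which are also visible from userspace. A rough sketch, assuming an arm64 Linux box whose device tree supplies the capacity values (on most x86 machines the sysfs file simply isn't there):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // Print the relative capacity Linux assigns to each CPU. On arm64
    // big.LITTLE-style systems the kernel derives these from the device
    // tree and the scheduler uses them to place work on big vs. little
    // cores; on most x86 boxes the file is absent and nothing is printed.
    func main() {
        paths, _ := filepath.Glob("/sys/devices/system/cpu/cpu[0-9]*/cpu_capacity")
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue
            }
            fmt.Printf("%s: %s\n", filepath.Base(filepath.Dir(p)), strings.TrimSpace(string(data)))
        }
    }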


Industry behemoths do need near-death experiences from time to time. Intel is not yet near-death. They still have overwhelming market share and it still seems to them that they are "succeeding". Their future will depend on whether or not they choose to get high off their own PR supply.


What is disruption? I myself feel that the scope and meaning of the term grow with each passing year.

This disruption, that disruption... what of it?

The word is progressively losing any meaning.


The article is discussing disruption as described in this book:

https://www.goodreads.com/book/show/2615.The_Innovator_s_Dil...

In the article, the chart from Christensen's early work on disruptive innovation and the chart of Apple chips overtaking Intel chips in performance do bear a striking resemblance to each other.


Did Apple mislead Intel about the volume to get a "no"? That would be in line with the hostile nature of this company. Profit above everything.


What percentage of PC unit sales do Macs make up? Does anybody seriously see that market share growing because of the M1? Macs have a lot of baggage to overcome. Not saying they can't, but both AMD and Intel are not work-shy lightweights, and many of the advantages of the M1 do not translate to the expandable platforms the PC world expects.


I think another thing to consider is AWS Graviton2, and that is based on ARM as well.


Totally agree, but Graviton, impressive though it is, doesn't have the aggregate advantages of the M1. Intel is set up to respond to that challenge; to the M1, not so much.



