
5 and 10 year plans are always overly optimistic projections by people who can't deliver in the present, meant to shore up investors' faith.

They might ship 1.4nm, but it has a good chance of having Soviet tractor quality.




I usually hear the opposite: people tend to underestimate what they can do in 10 years and overestimate what they can do in 1-3


You might want to check out Intel's historical roadmaps. They've been missing their own marks since about 2015, IIRC.


Sounds broadly consistent with the parent poster’s point?


Not if they meant they missed their 2005 roadmap for 2015.


Someone please correct me if I'm wrong.

Intel unveiled their infamous "Tick-Tock Model" around 2007 [1]. It went according to plan for all of four years, and then it COMPLETELY fell apart. If anything, I'm willing to bet they KNEW they could hit the first few iterations. I'm guessing for hardware, you've probably got a really good idea if you're going to be able to even manufacture something in two years, let alone mass-distribute, produce, and sell it at the price point you want. I'm also pretty certain they KNEW they wouldn't hit the rest of the roadmap.

Honestly, I think it was purposefully misleading investors. I heard from dozens of engineers at the company around 2008 that there was NO WAY they would have 10nm chips around 2012 -- what the roadmap was more or less promising. And surprise, we didn't get them until 2018. Now they're promising 1.5nm in a similar time frame. I'm skeptical.

[1] https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model


> I heard from dozens of engineers at the company around 2008 that there was NO WAY they would have 10nm chips around 2012 -- what the roadmap was more or less promising.

Where did they promise 10nm in 2012? This presentation from 2011 shows 10nm in 2017: https://www.nextbigfuture.com/2011/06/intel-roadmap-from-jun... and in 2011 tick-tock was still going strong.

I think you messed up your math. Tick/tock was a process shrink every 2-3 years. Using the more aggressive 2 year cadence:

    45 nm – 2007
    32 nm – 2009
    22 nm – 2011
    14 nm – 2013
    10 nm – 2015
Using a more conservative 3 year cadence:

    45 nm – 2007
    32 nm – 2010
    22 nm – 2013
    14 nm – 2016
    10 nm – 2019
And if we look at what actually happened:

    65 nm – 2005
    45 nm – 2007
    32 nm – 2010
    22 nm – 2012
    14 nm – 2014
    10 nm – 2018/2019
(Cannon Lake-U 10nm technically shipped in 2018, but I don't think anyone really considers it volume-enough to count?)
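
Purely to make the cadence arithmetic easy to play with, here's a tiny Python sketch (my own, not from the thread) that projects the shrink years at a fixed 2- or 3-year cadence and shows the slip against the approximate actual years listed above, counting 10nm as 2019:

    # Project Intel node years at a fixed cadence and compare with the
    # (approximate) years those nodes actually shipped, per this thread.
    ACTUAL = {45: 2007, 32: 2010, 22: 2012, 14: 2014, 10: 2019}
    NODES = [45, 32, 22, 14, 10]

    def project(start_year, cadence_years):
        """Return {node: projected year}, starting from 45nm in start_year."""
        return {node: start_year + i * cadence_years
                for i, node in enumerate(NODES)}

    for cadence in (2, 3):
        projected = project(2007, cadence)
        print(f"-- {cadence}-year cadence --")
        for node in NODES:
            slip = ACTUAL[node] - projected[node]
            print(f"{node:>3} nm: projected {projected[node]}, "
                  f"actual {ACTUAL[node]} (slip {slip:+d} yr)")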

They pretty much nailed tick/tock flawlessly up until 10nm, 10 years out from when tick/tock was first announced. Expecting perfect 10-year predictions is an insane expectation for any company/person. There's no way in hell tick/tock's 2007 unveil could possibly be considered "misleading investors."


>Where did they promise 10nm in 2012? This presentation from 2011 shows 10nm in 2017:

Your link shows 7nm in 2017, 10nm was for 2015.

>Tick/tock was a process shrink every 2-3 years. Using the more aggressive 2 year cadence:

You are confusing "Tick Tock" with "Process, Architecture, Optimization". Tick Tock is strictly a 2-year cadence.

So yes 10nm missed by a large margin.

> 10 nm – 2018/2019

Intel has been making 10nm chips irrespective of yield; the current batch came from months of stockpiling chips before the rush to roll them out for Xmas. In reality they barely got it out of the gate in 2019. And if you count Cannon Lake as 2018, you might as well count TSMC 5nm in 2019.

>There's no way in hell tick/tock's 2007 unveil could possibly be considered "misleading investors."

They were not misleading in 2007; they executed their plan flawlessly, and Intel had decent people back then. Pat Gelsinger left in 2009. It was still going great up to 2012, then Otellini retired and BK became CEO in 2013, still promising Tick Tock. That is the point where the misleading of investors began.

And I forgot to mention that during all investor meetings Intel continued to reiterate that 10nm was on track, all the way until BK was gone. If that is not "misleading investors" I am not sure what is.


Flawlessly?

Intel 14nm's beginnings were far from flawless, even if it's nowhere near Intel 10nm's issues.


That's a little harsh on the history of the Tick-Tock roadmap. It didn't have its first real hiccup until 2014 with Haswell Refresh v Broadwell, and then Skylake was delivered on time a year later. It was Intel's complete failure to follow-up Skylake with a process shrink "Tick" that was—and still is—Intel's big problem. So it really worked out alright for Intel for about 8 years before hitting a wall.


Big lies seem self-reinforcing, particularly in public companies.

Or to put it another way, nobody is going to charge you with a crime for staying with the herd. That's a passive choice.

But when there's a dissenting voice, suddenly you have to make an active decision to ignore them. And that's when the lawsuits start producing emails about who knew what when.

End result: People intentionally (if they know better) or unintentionally (if people smarter than them are all saying the same thing) agree with the party line, even in the face of demonstrable facts otherwise.


But if they did mean that then they're mistaken. Intel just about nailed the 10-year prediction from 2005 to 2015, hitting a shrink every 2-3 years:

    65 nm – 2005
    45 nm – 2007
    32 nm – 2010
    22 nm – 2012
    14 nm – 2014


It’s easy to rag on Intel, but the market changed for them pretty fundamentally. Customers ultimately rule.

On the server side, big hyperscale datacenter customers have large, narrow purchase patterns. They are upending the market — ask around and figure out how many HPE or Dell CEs are still around servicing servers these days.

On the client side, similar patterns exist at a smaller scale. At the higher end Apple probably does 90% of their Intel business with like 10 SKUs. At the lower end, there’s a huge demand for cheap, and many companies skipped refresh cycles.

This was impactful imo as the old ways of dealing with manufacturing issues (sell underclocked parts, etc) are harder when Amazon has prepaid for 30 million units of SKU x.


People maybe, but institutions and organizations go the other way.

Human capital turnover, market trends, butterflies in Thailand all conspire to foil grand organizational visions.


But even if Intel isn't the one to do it, someone will. And given that AMD is just using contract fabs, it means we will have x86-64 chips with these technologies at roughly these times.

I wonder if an x86-64 CPU will exceed 1024 cores before the end of 2029? It feels like that is where we are headed.


Depends on whether there's market demand.

If there's something useful about having 32 32-core chiplets connected to an I/O hub talking to a bunch of PCIe5 lanes and a huge pile of DDR6 RAM, that could happen.

But getting all that compute logic coordinated in one place might not be that economically desirable compared to offloading to GPU-style specialized parallel processors, or building better coordination software to run distributed systems, or...


> connected to an I/O hub talking to a bunch of PCIe5 lanes and a huge pile of DDR6 RAM

So, basically a mainframe on a chip?

Honestly, how would that work? Is all that memory coherent? There are a couple of reasons why mainframes cost so much, and some of them are technical.

I keep expecting servers to evolve into multi-CPU machines with non-coherent RAM, but the industry keeps doubling down on coherent RAM¹. At some point servers will turn into a single-board blade hack (that you can mount on a blade hack, piled on a hack), and I wonder for how long CPU designers can sustain our current architecture.

1 - Turns out people working full time on the problem have more insight on it than me, go figure.


Bigger servers just get divided up into more VMs. As long as it's cost effective to scale vertically, it will continue to happen at the cloud platform level.


It'll be interesting to see what these high core count servers do to The Cloud though.

Fifteen years ago a lot of medium-sized organizations had a full rack of single and dual-core servers in their offices that cost half a million dollars and consumed >10KW of electricity day and night.

That made it attractive to put everything in the cloud -- spend $250K on cloud services instead of $500K on local hardware and you're ahead.

But their loads haven't necessarily changed a lot since then, and we're now at the point where you can replace that whole rack with a single one of these high core count beasts with capacity to spare. Then you're back to having the latency and bandwidth of servers on the same LAN as your users instead of having to go out to the internet, not having to pay for bandwidth (on both ends), not having to maintain a separate network infrastructure for The Cloud that uses different systems and interfaces than the ones you use for your offices, etc.

People might soon figure out that it's now less expensive to buy one local server once every five years.
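
Purely to make the "one local server every five years" arithmetic concrete, here's a toy annualized comparison; every number in it is hypothetical, not something from the thread:

    # Toy annualized comparison; all figures are made up for illustration.
    server_cost        = 20_000   # one high-core-count box, replaced every 5 years
    replacement_years  = 5
    power_cooling_year = 2_000    # guessed electricity + cooling per year
    cloud_bill_year    = 50_000   # guessed equivalent cloud spend per year

    local_per_year = server_cost / replacement_years + power_cooling_year
    print(f"local : ~${local_per_year:,.0f}/year")
    print(f"cloud : ~${cloud_bill_year:,.0f}/year")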


I think the hardware itself has always been vastly cheaper. But it gets more complex when you look at the TCO. With the in-house server you need at least two, in case one fails. Although now probably everyone is running their own mini cloud with Kubernetes, even in the cloud - and that should make it relatively cheap.

Then you need someone to plan, provision, troubleshoot, and maintain the physical servers. So at best a full time fully loaded position which costs the company roughly 2x the salary. And that's only if you know your workload so well that you can guarantee the shape of your hardware usage 3-5 years out. Rarely possible in practice.

I'd say always start with the cloud, and you'll know if or when you could do (part) of it cheaper yourself.


> With the in house server you need at least two, in case one fails.

That doesn't really change much when the difference is a four figure sum spread over five years.

> Then you need someone to plan, provision, troubleshoot, and maintain the physical servers. So at best a full time fully loaded position which costs the company roughly 2x the salary.

Would it really take a full time position to maintain two physical servers? That's a day or two for initial installation and configuration which gets amortized over the full lifetime, OS updates managed by the same system you need in any case for the guests, maybe an hour a year if you get a power supply or drive failure.

If the maintenance on two physical machines adds up to a full week out of the year for the person already maintaining the guests, something has gone terribly wrong. Which itself gets balanced against the time it would take the same person to configure and interact with the cloud vendor's provisioning system -- probably not a huge difference in the time commitment.

> And that's only if you know your workload so well that you can guarantee the shape of your hardware usage 3-5 years out. Rarely possible in practice.

Most companies will do about the same business this year as they did last year plus or minus a few percent, so it's really the common case. And if you unexpectedly grow 200% one year then you use some of that unexpected revenue to buy a third server.

Where the scalability could really help is if you could grow 200,000% overnight, but that's not really a relevant scenario to your average company operating a shopping mall or a steel mill.


Certainly if you have someone already on the payroll who can take responsibility for the hardware part time, that makes it a lot cheaper. That's a different skill set though, so it's not true for every company.

With respect to changing workloads, I wasn't thinking so much about scale, which I think isn't that hard to plan for, but more about changing requirements. If you add, remove, or change a piece of your stack the cloud gives a lot of flexibility. Add memcached, no problem, spin up some high mem instances. Need more IO on the database server, switch to an instance with fast SSDs, or a bigger instance. I think those kinds of changes are common and hard to plan for. Until it happens you probably don't know if you are disk, network, memory, or CPU bound.

Once your stack is sufficiently mature and not changing much the workload gets a lot more predictable. The cloud is really good for starting out. The danger is it's also really good at locking you in, then you are stuck with it.


> Certainly if you have someone already on the payroll who can take responsibility for the hardware part time, that makes it a lot cheaper. That's a different skill set though, so it's not true for every company.

True, though all the physical hardware stuff is pretty straightforward, to the point that anybody competent could figure it out in real time just by looking at the pictures in the manual. Configuring a hypervisor is the main thing you actually have to learn, and that's a fundamentally similar skillset to systems administration for the guests. Or for that matter the cloud vendor's provisioning interface. It's just different tooling.

> If you add, remove, or change a piece of your stack the cloud gives a lot of flexibility. Add memcached, no problem, spin up some high mem instances. Need more IO on the database server, switch to an instance with fast SSDs, or a bigger instance. I think those kinds of changes are common and hard to plan for. Until it happens you probably don't know if you are disk, network, memory, or CPU bound.

I see what you're saying.

My point would be that the hardware cost is now so low that it doesn't really matter. You may not be able to predict whether 8 cores will be enough, but the Epyc 7452 at $2025 has 32. 256GB more server memory is below $1000. Enterprise SSDs are below $200/TB. 10Gbps network ports are below $100/port.

If you don't know what you need you could spec the thing to be able to handle anything you might reasonably want to throw at it and still not be spending all that much money, even ignoring the possibility of upgrading the hardware as needed.
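
To show how little that adds up to, here's a toy bill-of-materials sketch using the ballpark unit prices above; the quantities are my own guesses, not anything from the thread:

    # Rough bill of materials using the ballpark prices quoted above;
    # quantities are made up for illustration.
    parts = {
        "Epyc 7452 (32 cores)":    (1, 2025),
        "256 GB server memory":    (2, 1000),   # 512 GB total
        "Enterprise SSD (per TB)": (8,  200),   # 8 TB of flash
        "10 Gbps network port":    (4,  100),
    }
    total = sum(qty * unit for qty, unit in parts.values())
    for name, (qty, unit) in parts.items():
        print(f"{name:<28} x{qty}  ${qty * unit:,}")
    print(f"{'total':<28}      ${total:,}")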

> Once your stack is sufficiently mature and not changing much the workload gets a lot more predictable. The cloud is really good for starting out. The danger is it's also really good at locking you in, then you are stuck with it.

Right. And the cloud advantage when you're starting out is directly proportional to the cost of the hardware you might need to buy in the alternative at a time when you're not sure you'll actually need it. But as the cost per unit performance of the hardware comes down, that advantage is evaporating.


I don't think the hardware is that trivial. I used to think that, but I've learned a lot more respect for the people who understand how to troubleshoot it, how to purchase compatible components, and spot troublesome products and brands. It's a whole career, it has its nuances.

But you make some good points. In general I agree it's cheaper to vastly over provision than to use the cloud. And you can do things like build an insane IO system for your database, which you can only sort of do in the cloud.

Of course this is an advantage for hosting internal company stuff; for web-facing things you may need to place hardware in remote datacenters, and then you do need people on location, on call, who can service it. You generally have to have much larger scale for that to make sense. Even Netflix, because of the variability of their load, still uses a combination of the cloud and their own hardware.


> I don't think the hardware is that trivial. I used to think that, but I've learned a lot more respect for the people who understand how to troubleshoot it, how to purchase compatible components, and spot troublesome products and brands. It's a whole career, it has its nuances.

I didn't mean to suggest there isn't a skillset there. And that's really important when you're doing it at scale. The person who knows what they're doing can do it in a fifth of the time -- they don't have to consult the manual because they already know the answer, they don't have to spend time exchanging an incompatible part for the right one.

But when you're talking about an amount of work that would take the expert three hours a year, having it take the novice fifteen hours a year is not such a big deal.

> Of course this is an advantage for hosting internal company stuff, for web facing things you may need to place hardware in remote datacenters, and then you do need people on location on call who can service it. You have to generally have much larger scale for that to make sense.

On the other hand you have to have rather larger scale to even need a remote datacenter. A local business that measures its web traffic in seconds per hit rather than hits per second hardly needs to be colocated at a peering exchange.

It's really when you get to larger scales that shared hosting starts to get interesting again. Because on the one hand you can use your scale to negotiate better rates, and on the other hand your expenses start to get large enough that a few percent efficiency gain from being able to sell the idle capacity to someone else starts to look like real money again.


The Xeon Phi[1] CPUs were already up to 72c/288t in 2016; if the market demanded it I'm sure Intel could build it, but more than likely the high core counts are GPU/compute card territory until ARM scales its way up that high.

[1]https://en.wikipedia.org/wiki/Xeon_Phi


Intel promised 10 Ghz by 2005, if I remember correctly.


Tbf, IPC went up by enough that that effectively happened.


I actually heard and memorized the opposite: people tend to overestimate what they can do in 10 years (for example, in 1980, some people thought flying cars would be a thing in the year 2000) and they underestimate what happens in 1-3 (for example, our smartphones or computers get twice as fast, but the change is so incremental that people don't notice it).


Interesting bias indeed. Medium-term ambition versus long-term patience.


Processors may get smaller, but the size of dust remains the same.

The advances needed to achieve something like this are more in the realm of cleanroom advances than processor architecture.


Soviet tractor quality wasn't that bad, especially considering they shared a lot with tanks. And for the price they were likely even better :) . Not to mention that they came with the capability to be repaired - something Intel will likely, and sadly, not include in its chips...


> Soviet tractor quality wasn't that bad, especially considering they shared a lot with tanks.

Fun fact: The first tank (turreted, not land-whale) was based on a tractor design.

See: https://en.wikipedia.org/wiki/Renault_FT


What is a land-whale tank?


The kind of tank design that appeared in World War I. They were designed to cross the cratered wastelands and trenches created by the fighting. The crews fought from gun ports and mounted cannons/machineguns, rather than a rotating turret.

Examples:

- https://en.wikipedia.org/wiki/Mark_IV_tank

- https://en.wikipedia.org/wiki/Mark_V_tank


Socialist design is generally pretty good, in my experience. I preferentially buy DDR-made tools when it makes sense, because they tend to be very solid, and very easy to maintain and repair. The furniture can also be nice.


Interesting analogy. Soviet tractors famously were slow to arrive.


And half didn't work / only met the specifications in appearance, but not so much in functionality.


And they were typically indestructible once they did run. Soviet pre-fall engineering was ugly and the factories sucked, but the designs were solid and could run forever with very little maintenance.


Though CPUs are already all extremely durable. So an under-specced indestructible CPU is... just an under-specced CPU.


OOC, do you view Intel's projections as being too slow or just plain wrong (i.e., impossible)? I am not knowledgeable about this industry or the engineering behind it, just follow it casually, and I'm shocked to see 1.4nm on a slide.


Why do investors keep believing it?


Their P/E ratio is 13.36. Perhaps they don't?

The slide seems to have been first disclosed by ASML, which has a vested interest in selling new generations of manufacturing tools.

ASML's P/E ratio is 48.02.


I don't know that they actually are. Like most of the S&P 500, Intel is actively participating in billion-dollar stock buyback programs. These allow companies with Scrooge-McDuck-scale money bins to maintain vanity stock prices while their fundamentals are on fire.



Intel has generated $19.3 billion in net income in the prior four quarters, giving it a whopping 12.9 PE ratio over that time.

What vanity stock price exactly? You could hardly have picked a worse example in this bubbly market.
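
For reference, the valuation implied by those two figures (trailing net income and P/E) is just a multiplication; a one-line sketch:

    # Implied valuation from the figures above (trailing-twelve-month basis).
    net_income = 19.3e9   # prior four quarters, from the comment
    pe_ratio = 12.9
    print(f"implied market cap ~ ${net_income * pe_ratio / 1e9:.0f}B")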


It's a pretty rich multiple if one believes their fundamentals have been deteriorating. Intel has spent the last decade abandoning market after market while simultaneously not being able to execute in the markets they were still in. If something doesn't change soon, even that 12.9 p/e won't be sustainable.


Investors believe that the company will be profitable. It doesn't really matter to them if a company has a habit of overpromising or underpromising.


What choice do they have besides selling their stake and leaving?


Are they? Maybe if you push people really hard (completely unrealistic) they will still deliver much more than otherwise ("just" unrealistic). You might not land a rocket on Mars, but you can still land it on water...


Thinking small is a self-fulfilling prophecy, just as work fills allocated time.


Honestly, who even needs 1.4nm? Hell, who needs damned 10nm? Most people could get by on yester-decade's CPUs no problem.

We need something new, not the same old “add more cameras to the phone” kind of innovations.

It's the engineers who will make those discoveries - the investors can shove it. All they do is freeload on innovation and cramp people's style.


Even if you don't need the power (and you sometimes do, and even if you don't right now there'll be some Electron app that will make you wish you had a 1.4nm CPU eventually), improving the process usually means better efficiency too, which is important to make batteries last longer and datacenters less wasteful. It'll probably make for some good-looking videogames too.


Your thinking is flawed. Consumers don't give a damn for the most part, but Intel isn't building this tech for them. They're doing this for enterprise data centers, which constantly demand smaller, more powerful, and more energy-efficient chips.


Consumers also care about getting better energy efficiency out of their battery powered devices. It's really just the desktop PC market where being stuck with 14nm CPUs is a non-issue, because wall power is cheap and cooling is easy.


I agree, they care about battery performance, but they don't understand the correlation of CPU to battery, so they don't purchase based on that criterion.

The vendor that makes the device definitely cares, though I wonder how much Intel cares about consumer products as a percentage of market share compared to servers.


Data centers absolutely care about power usage.


That's what I said...


I’d like a higher clock frequency, as the software I use does not scale well across cores. Things have been stuck sub 4 GHz for many years.


I like to write emulators as a hobby so I definitely feel your pain, but I'm not holding my breath for a significant frequency boost in the near future. A 4GHz CPU means that in every cycle, light itself only moves about 7.5cm in a vacuum. If you have a fiber optic cable running at 10GHz, at any given time you effectively have one bit "stored" in the fiber every 3cm or so. It's frankly insane that we manage to have such technology mass produced and in everybody's pocket nowadays. My "cheap" Android phone's CPU runs at 2.2GHz, 2.2 billion cycles per second.
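
For anyone who wants to check those distances, here's a quick sketch using the vacuum speed of light (in real fiber the signal travels roughly a third slower, so the in-fiber spacing is a bit tighter than the vacuum figure):

    # Distance light travels in a vacuum during one clock cycle.
    C = 299_792_458   # speed of light, m/s
    for freq_ghz in (2.2, 4.0, 10.0):
        cycle_cm = C / (freq_ghz * 1e9) * 100
        print(f"{freq_ghz:>4} GHz -> ~{cycle_cm:.1f} cm per cycle")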

We can probably still increase the frequencies a bit but we definitely seem pretty close to some fundamental limit in our current understanding of physics. The frequency doublings every other year we experienced until the early 2000's are long gone I'm afraid, and they might never come back until we manage to make a breakthrough discovery in fundamental physics.


I think it's more of a power dissipation issue. The amount of charge, and thus current, you are moving in and out of the gate capacitance is proportional to clock frequency. Since power is I^2*R, it is proportional to f^2.

Smaller transistors reduce the I, but R goes up with smaller interconnects. The RC time constant also adds delay, probably more so than length.

That being said, 3D stacking won't help with heat, and dielets won't help with delay. I'd rather have 4 cores at 10 GHz than 64 cores at 3 GHz.
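
To put rough numbers on the scaling argument in this comment, here's a toy sketch that just encodes the I^2*R-with-current-proportional-to-f model described above; the capacitance, voltage, and resistance values are entirely made up:

    # Toy rendering of the I^2*R scaling argument above; all values hypothetical.
    # Charge per toggle Q = C*V, average current I = Q*f, power modeled as I^2*R,
    # so dissipation grows with the square of the clock frequency.
    C_GATE = 1e-15   # gate capacitance, farads (hypothetical)
    V_DD   = 1.0     # supply voltage, volts (hypothetical)
    R      = 100.0   # effective interconnect resistance, ohms (hypothetical)

    def power(freq_hz):
        current = C_GATE * V_DD * freq_hz   # charge per cycle times cycles/sec
        return current ** 2 * R

    base = power(3e9)
    for f in (3e9, 6e9, 10e9):
        print(f"{f / 1e9:>4.0f} GHz: {power(f) / base:.1f}x the 3 GHz dissipation")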


You can probably rewrite the codebase to utilize n threads before anyone releases an 8, 12, or 36 GHz CPU.


It’s electromagnetic simulation, specifically finite element. You can parallelize some of the math, but mostly not. You can break the structure into smaller sub-domains, but that has issues too. Not much gain beyond 2-4 cores.


Not my area of expertise, but I was under the impression that finite element analysis, like other sparse algebra problems, is reasonably well suited for GPUs, which are much more parallel than 2 or 3 cores. Have you looked into that?


The time domain codes work well with GPUs and multiple cores, but the frequency domain ones don’t. I don’t know enough of what’s going on under the hood, but it’s like that for all of them.


I've worked with applied math PDE people and they use supercomputers to full effect. Granted it's a real pain and your cross connect bandwidth matters (hence supercomputer), but you can scale up pretty well.


I thought FE was mostly memory bandwidth bound?


Everyone wants faster CPU cores. Can you imagine how much simpler it would be to just program a 40GHz processor instead of writing a program that supports 10 4Ghz processor cores?


I might not need more improvement than today's state-of-the-art consumer CPU. But even the best-in-class GPUs are not overpowered for gaming. With ray tracing and 4K I could easily use another 4-8x transistor density.



