
Honestly, who even needs 1.4nm? Hell, who needs a damned 10nm? Most people could get by on yester-decade's CPUs, no problem.

We need something new, not the same old "add more cameras to the phone" kind of innovation.

It's the engineers that will make those discoveries; the investors can shove it. All they do is freeload on innovation and cramp people's style.




Even if you don't need the power (and sometimes you do, and even if you don't right now, there'll eventually be some Electron app that makes you wish you had a 1.4nm CPU), improving the process usually means better efficiency too, which matters for making batteries last longer and datacenters less wasteful. It'll probably make for some good-looking videogames too.


Your thinking is flawed. Consumers don't give a damn for the most part, but Intel isn't building this tech for them. They're doing it for enterprise data centers, which constantly demand smaller, more powerful, and more energy-efficient chips.


Consumers also care about getting better energy efficiency out of their battery powered devices. It's really just the desktop PC market where being stuck with 14nm CPUs is a non-issue, because wall power is cheap and cooling is easy.


I agree, they care about battery life, but they don't understand how the CPU affects it, so they don't purchase based on that criterion.

The vendor that makes the device definitely cares, though. I wonder how much Intel cares about consumer products as a percentage of market share compared to servers.


Data centers absolutely care about power usage.


That's what I said...


I'd like a higher clock frequency, as the software I use does not scale well across cores. Things have been stuck below 4 GHz for many years.


I like to write emulators as a hobby, so I definitely feel your pain, but I'm not holding my breath for a significant frequency boost in the near future. At 4GHz, light itself only moves about 7.5cm in a vacuum per cycle. If you have a fiber optic cable running at 10GHz, at any given time you effectively have one bit "stored" in the fiber every 3cm or so. It's frankly insane that we manage to have such technology mass-produced and in everybody's pocket nowadays. My "cheap" Android phone's CPU runs at 2.2GHz: 2.2 billion cycles per second.
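
Back-of-the-envelope, in case anyone wants to check those distances (a quick Python sketch, with the speed of light as the only real input):

    # How far light travels per clock cycle at a few frequencies.
    c = 299_792_458  # speed of light in a vacuum, m/s
    for label, f_hz in [("4 GHz CPU", 4e9), ("10 GHz link", 10e9), ("2.2 GHz phone", 2.2e9)]:
        print(f"{label}: ~{c / f_hz * 100:.1f} cm per cycle")
    # -> ~7.5 cm, ~3.0 cm, ~13.6 cm respectively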

We can probably still increase frequencies a bit, but we seem pretty close to some fundamental limit in our current understanding of physics. The frequency doublings every other year that we enjoyed until the early 2000s are long gone, I'm afraid, and they might never come back unless we make a breakthrough discovery in fundamental physics.


I think it's more of a power dissipation issue. The amount of charge, and thus current, you move in and out of the gate capacitance is proportional to clock frequency. Since power is I^2*R, it is proportional to f^2.

Smaller transistors reduce the I, but R goes up with smaller interconnects. The RC time constant also adds delay, probably more so than length.

That being said, 3D stacking won't help with heat, and dielets won't help with delay. I'd rather have 4 cores at 10 GHz than 64 cores at 3 GHz.
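
A quick sketch of that scaling, using the simple model above (average switching current I ~ C*V*f, dissipated as I^2*R); the C, V, and R values are made up purely to show the trend:

    # Illustrative only: made-up C, V, R to show the f^2 trend of I^2*R.
    C = 1e-15  # effective switched capacitance, farads (assumed)
    V = 1.0    # supply voltage, volts (assumed)
    R = 100.0  # effective interconnect resistance, ohms (assumed)
    for f in (1e9, 2e9, 4e9, 8e9):
        i_avg = C * V * f   # current grows linearly with frequency
        p = i_avg ** 2 * R  # so dissipation grows with f^2
        print(f"{f / 1e9:.0f} GHz: {p * 1e9:.2f} nW per gate")
    # doubling the frequency quadruples the dissipation in this model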


You can probably rewrite the codebase to utilize n threads before anyone releases an 8, 12, or 36 GHz CPU.


It’s electromagnetic simulation, specifically finite element. You can parallelize some of the math, but mostly not. You can break the structure into smaller sub-domains, but that has issues too. Not much gain beyond 2-4 cores.


Not my area of expertise, but I was under the impression that finite element analysis, like other sparse linear algebra problems, is reasonably well suited to GPUs, which are much more parallel than 2 or 3 cores. Have you looked into that?
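
For what it's worth, here's a minimal sketch of the kind of kernel involved: a sparse CG solve with SciPy, plus the (assumed, roughly drop-in) CuPy equivalent in comments. The 1-D Poisson matrix is just a stand-in for a real stiffness matrix.

    # CG on a sparse SPD system, the core kernel of many FE solvers.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 10_000
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # 1-D Poisson stand-in
    b = np.ones(n)
    x, info = spla.cg(A, b)  # CPU baseline; info == 0 means it converged

    # Assumed GPU equivalent via CuPy's scipy-like interface (untested sketch):
    # import cupy as cp, cupyx.scipy.sparse as csp, cupyx.scipy.sparse.linalg as cspla
    # x_gpu, info_gpu = cspla.cg(csp.csr_matrix(A), cp.asarray(b))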


The time domain codes work well with GPUs and multiple cores, but the frequency domain ones don’t. I don’t know enough of what’s going on under the hood, but it’s like that for all of them.


I've worked with applied math PDE people and they use supercomputers to full effect. Granted it's a real pain and your cross connect bandwidth matters (hence supercomputer), but you can scale up pretty well.


I thought FE was mostly memory bandwidth bound?


Everyone wants faster CPU cores. Can you imagine how much simpler it would be to just program a 40GHz processor instead of writing a program that supports ten 4GHz processor cores?
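
To make that concrete with a toy (entirely hypothetical) example: the single-fast-core version of "sum a big list" is one line, while the ten-core version needs chunking, a process pool, and a merge step, and that's before any shared state gets involved:

    # Toy comparison only; real work would need to dwarf the cost of
    # shipping chunks to worker processes.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        return sum(chunk)

    def parallel_sum(values, workers=10):
        # Split into one chunk per worker, sum each in its own process, merge.
        step = (len(values) + workers - 1) // workers
        chunks = [values[i:i + step] for i in range(0, len(values), step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        data = list(range(1_000_000))
        assert sum(data) == parallel_sum(data)  # the one-liner vs. all of the above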


I might not need anything beyond today's state-of-the-art consumer CPU. But even the best-in-class GPUs are not overpowered for gaming. With ray tracing and 4K I could easily use another 4-8x transistor density.



