Ask HN: Is Moore's Law over, or not?
54 points by larsiusprime on May 19, 2023 | 89 comments
Every once in a while I see an article that seems to claim that Moore's Law is over, or slowing down, or about to be over. Then I see some counter-claim, that no, if you account for added cores, or GPUs, or some other third thing, that actually it's still right on track. This cycle has repeated every year for like the past 10 years, but the last few years feel like things have really started to slow down. Maybe that was partially illusory with the chip slowdown from the pandemic, but I figure now that we're several years out we should be able to say for sure.

It also seems like a pretty important question to answer because it has big implications for the advancement of AI technology which has everyone so freaked out.

So what's the consensus around here? Is Moore's Law actually over yet, or not?




Earlier today on HN there was a submission about great CPU stagnation. In the blog post was an interesting link: https://raw.githubusercontent.com/karlrupp/microprocessor-tr...

This graph shows me that while yes, technically Moore's law of doubling transistors per "thing Intel or AMD sells you" is still holding, it has ended for single-threaded workloads. Moore's law is only holding due to core count increases.

For the everyday use case of users running multiple simple programs/apps, that's fine. But for truly compute-heavy workloads (think CAD software or any other heavy processing), developers turned to GPUs to keep getting compute improvements.

Writing amazing programs taking full advantage of the core count increase is simply impossible (see Amdahl's law). So even if one wanted to rearchitect programs to take full advantage of the overall transistor count from ~2005 to now, they won't be able to.

Compare with pre-2005, where one just had to sit & wait to see their CPU-heavy workloads improve... It's definitely a different era of compute improvements.
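For reference, the bound being invoked here is Amdahl's law: if a fraction p of the work parallelizes perfectly over N cores, the speedup is roughly

  S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

so with, say, p = 0.95, no core count ever gets you past a ~20x speedup.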


My most CPU-intensive usage is compilation and "make -j8" is doing a pretty good job at scaling things to fully utilize my CPU. It's not about one program using all your resources.


Linking is still super slow, especially if you use link time code generation.


LTO basically just calls back into the compiler and is multi-threaded on lld and mold at least, and probably anything else that supports LTO. And make sure you're using ThinLTO, it's still expensive but much faster than full LTO.


Have you tried the mold linker? I've been using it and it works pretty well, quite fast.

https://github.com/rui314/mold


>Writing amazing programs taking full advantage of the core count increase is simply impossible (see Amdahl's law). So even if one wanted to rearchitect programs to take full advantage of the overall transistor count from ~2005 to now, they won't be able to.

I don't think that's the goal right now, though - it's about parallelizing multiple separate tasks/programs/etc: IOW, you can run 50 applications on a single 64-core CPU that used to take 50 individual servers.


Not many consumers are running 50 applications at once on their home servers.


>Writing amazing programs taking full advantage of the core count increase is simply impossible (see Amdahl's law).

Do you need theoretical full?

Practical full is enough, and still a huge improvement over a single core.


That's up to the program: if the non-parallelisable parts are big, then you won't get big speedups no matter what. GPUs are used for tasks that are widely parallelisable; GUIs (as in desktop environments) won't really benefit from a 10x faster GPU.


> Writing amazing programs taking full advantage of the core count increase is simply impossible (see Amdahl's law)

I don't see this as true at all, using Wikipedia's example -

"a program needs 20 hours to complete using a single thread, but a one-hour portion of the program cannot be parallelized, therefore only the remaining 19 hours execution time can be parallelized, then regardless of how many threads the minimum execution time is always more than 1 hour"

Just increase "19 hours" to "400 hours" execution time, it's possible to increase output as far as you want.

If your mindset can't get past "the one hour matters", sure; or change your mindset so that the "19 hours" part of the calculation is what matters.

I don't fully get Gustafson's Law, but maybe it touches on this.
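A rough sketch of the two laws side by side (my own illustration; only the 1-hour / 19-hour split comes from the Wikipedia example above):

  # Amdahl: fixed workload; p = fraction that parallelizes (0.95 for 19 of 20 hours).
  def amdahl_speedup(p, n):
      return 1.0 / ((1.0 - p) + p / n)

  # Gustafson: workload grows with the machine; s = serial fraction of the time
  # actually spent on the n-core machine.
  def gustafson_speedup(s, n):
      return s + (1.0 - s) * n

  for n in (8, 64, 1024):
      print(n, round(amdahl_speedup(0.95, n), 1), round(gustafson_speedup(0.05, n), 1))
  # Amdahl tops out near 20x no matter how many cores you add;
  # Gustafson keeps growing because the problem is assumed to grow with the cores.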


>Just increase "19 hours" to "400 hours" execution time, it's possible to increase output as far as you want.

Yeah, sure. Just give up on the thing you wanted to do and instead do something entirely different that is more parallelizable. That'll keep your CPU utilization high, but it won't actually accomplish your goal (presuming you were using software for some purpose other than maximizing CPU utilization).


Nvidia thinks that Moore's Law is dead. https://arstechnica.com/gaming/2022/09/do-expensive-nvidia-g...

Intel, by contrast, says that Moore's Law is still alive. But Intel is technologically behind, and it is easier to improve when there is someone to learn from, so maybe there is a wall that they haven't yet hit.

Regardless, it is a very different law than when I was young, when code just magically got faster each year. Now we can run more and more things at once, but the timing for an individual single-threaded computation hasn't really improved in the last 15 years.


> but the timing for an individual single-threaded computation hasn't really improved in the last 15 years

The clock speeds haven't really gone up anymore, but computations still got considerably faster. From an i7 2700k (2011) to an i7 13700k single core benchmark scores went up 131%

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-2700K-vs...


First, that's the kind of change we used to get in 2 years. Having it happen over a decade is barely noticeable compared to where we used to be.

Second, over that time period we've had a lot of changes in tooling. Some make code faster. Most make code slower. (Examples include the spread of containerization, and adoption of slow languages like Python.) The result is that programs to do equivalent things might wind up actually faster or slower, no matter what a CPU benchmark shows.


Yeah, the original Pentium released at 60 MHz in 1993, and ten years later there were 3 GHz Pentium 4s. The '90s were pretty nuts.
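Back-of-the-envelope, taking those two data points at face value (the 2.31x is the 131% single-core uplift mentioned a few comments up):

  (3000 / 60)^(1/10) = 50^0.1 ≈ 1.48   (~48% per year of clock growth, 1993-2003)
  2.31^(1/12) ≈ 1.07                   (~7% per year of single-thread growth, roughly 2011-2023)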


If you read the article, it is clear Jensen Huang is commenting only on the price aspect. It used to be the case that as density goes up, price goes down. Huang is saying that price is no longer going down, and he is probably right. But density is going up. It is even unclear whether it is slowing down.


Nvidia is trying to reset expectations of consumers.

Jensen aims to charge more for more GPU computing power into the future.

This is because Nvidia has close to monopoly power and is thus able to break Moore's Law single-handedly.


Hell, they are able to sell graphics cards (well, not Nvidia themselves, but still) in excess of 2k bucks. They'd be stupid not to try to keep prices that high, now that the market seemingly accepts them.


They’re losing a lot of customer love.

IMO if Intel stay in the game it’ll be sorted out within a few years.

Nvidia may have good software but people like paying less money.

Paying less will win out.


The 4090 sold strong at first, but the rest of Nvidia’s 40-series offerings haven’t sold nearly as well. It’s still not clear if their attempt to reset expectations toward higher prices will work out in the long term, but in the short term it’s been a bit of a failure.


They might be putting themselves in a similar position as Intel though (maybe worse since I don’t recall Intel ever being as greedy..) if their competitors eventually catch up.

Unlike in gaming, in the data center initial cost + performance per watt are the only things that really matter (besides software; Nvidia has a huge moat there..). And in relation to how much Nvidia is charging per GPU, total power costs are close to zero.. So 4 'worse' but much cheaper chips might be a better deal than buying an A/H100 etc.


> Unlike in gaming, in the data center initial cost + performance per watt are the only things that really matter

That's not true from a hardware perspective either. You can't just plug in 4 worse cards in the same rack. The savings on the graphics cards become less significant if you need to double/quadruple all other hardware to increase the number of racks. A 1U blade can easily cost $10000 without a graphics card.


You can plug in multiple GPUs on a computer. AWS has fleets with 4 and 8, see https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html for pricing.

For anything datacenter related, customers are very sensitive to price per performance. And datacenters are happy to oblige.


You're missing the point GP made though. He wants to replace one good card with 4 worse ones. It's not like that rack has 3 or 7 additional slots just unused. They're also already taken by the setup. And in the link you provided 5 out of 6 offerings are still Nvidia GPUs.

Of course data centers are happy to oblige to customer demands, but initial cost per GPU and performance per watt are certainly not the only relevant factors.


I can build a GPUless microATX tower for $600 and the 3sqft of extra space costs $400. Somebody is overcharging you for 1U blades.


But does Nvidia have their own AMD-like foe waiting around the corner to fight them for the crown? I'm not sure there's anyone in the near future capable of sustaining such a fight...


I don't understand the part about the market accepting the price, or rather, I find it hard to believe that it's sustainable. I've played PC games my whole life and used to enjoy building and re-building my gaming PC every so often. Paying 2k for a single piece of hardware just doesn't seem like the right choice anymore. Makes more sense to buy a console (or two) these days.


With little progress in performance, they're gonna compete with secondary market aka used devices.


interesting take

otoh it can be said that gpu's just look like they improved longer than cpu's because their workload is vastly more susceptible to parallelism.


Nvidia and Intel are in very different markets and their product ranges barely overlap.

And well, Nvidia is almost a monopoly at this point, so they have barely any incentive to continue innovating as opposed to trying to extract as much money as possible from their clients.

On the other hand look at what happened with CPUs over the last few years. Huge improvements in efficiency (including Intel)

> hasn't really improved in the last 15 years.

I don’t think that’s even close to being the case in almost all use cases. Increasing complexity/bloat has obfuscated that to a large degree though.


Dennard Scaling is dead https://en.wikipedia.org/wiki/Dennard_scaling which is why things stopped magically getting faster (chips started getting too hot)


> Intel is technologically behind, and it is easier to improve when there is someone to learn from

This is interesting to me, how did they end up like this?


By being so far ahead of everyone else that they thought they didn't have to do anything anymore.

On desktop, anyway. Mobile is a different story: back in the mid-2000s they had everything to dominate the market for the next 10+ years (e.g. the fastest ARM chips) yet chose not to, for reasons..


They put BK as CEO and hired that Murthy guy.


I think it's worth mentioning that "Moore's Law" is not actually a "law". It's just an observation of a historical trend.

Moore posited in 1965 that the number of transistors per chip would roughly double every year - something he himself called "a wild extrapolation" in a later interview.

Actual development speed proved slower than that, so in 1975 he revised his prediction to transistors doubling every two years - so, the original "Moore's Law" was already dead by then. The second revision of his prediction proved more long-lived, in part because manufacturers were actively planning their development and releases around fulfilling that expectation - making it sorta a self-fulfilling prophecy.

There was another slowdown in 2010 though - with actual development falling behind "schedule" since then.

But neither the "doubling" - nor the "year" or "two years" were ever anywhere near precise to begin with, so the question "is it dead" depends highly on how much leeway you are willing to give.

If you demand strict doubling every year - that's been dead since before 1975.

If you demand roughly doubling every two years - that's probably mostly dead since 2010.

If you allow for basically exponential growth but with a tiny bit of slow down over time - then it's still alive and well.

There can be no precise answer to the question - since the whole prediction was so imprecise to begin with. I don't think there's any benefit to getting hung up on drawing a supposedly precise line in the sand...
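To put numbers on how much that leeway matters, a trivial sketch (not tied to any real chip):

  # Growth over a decade for different assumed doubling periods.
  def growth(years, doubling_period):
      return 2 ** (years / doubling_period)

  for period in (1, 2, 2.5, 3):
      print(f"doubling every {period} years -> {growth(10, period):.0f}x per decade")
  # prints roughly 1024x, 32x, 16x and 10x respectively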


A law is something which is observed to be true. We're not talking about a legal law/rule that's enforced. In that sense it's one and the same.


A law of physics or nature is something that HAS to be that way, and could not be different.


Like Newton's laws of motion? They're only observed to be true, and cease to be at relativistic speeds.


Moore's law is still going strong. But when people talk about Moore's Law ending they normally mean a wider set of trends, specifically:

- Transistor count doubles every ~24 months (Moore's law) - still going strong

- Total power stays ~constant (Dennard Scaling) - no longer holds

- Total cost stays ~constant - don't know if there is a name for this, but it no longer holds either

The real magic was when you had all three trends working together, you would get more transistors for the same power and same total cost. As the last two trends have fallen off, the benefit of Moore's law by itself is reduced.

There is still some effect of the last two trends, power per transistor and cost per transistor are still dropping, just at a slower rate than density is increasing. So power budgets and costs continue to grow. Hence 450W GPUs and 400W CPUs.
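One way to see why all three mattered together: what you ultimately buy is transistors per dollar and per watt, roughly

  transistors per $ = (transistors per mm2) / ($ per mm2)
  transistors per W = (transistors per mm2) / (W per mm2)

Density (the numerator) is still climbing, but $ per mm2 and W per mm2 no longer hold flat, so the ratios people actually care about improve more slowly than the raw density trend suggests.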


I worked at Intel when the first mobile Pentiums were being developed. Back then, gate oxides were 10-12 atoms thick. That was nearly 30 years ago and feature size was 350nm or .35 micron.

Today's 3nm processes use three-dimensional gates with film thicknesses on the order of 5 to 8 atoms, and the feature size is smaller than the wavelength of the light used to expose the wafers; the different mask reticles rely on slit-interference effects to make features smaller than the EUV wavelength of around 13.5nm.

Getting much smaller than 1nm using these techniques is going to run into fundamental physical limits within a decade, and that limit will probably be around a .5nm feature size.

The next frontier in silicon will be building three dimensional chips and IBM is a pioneer in 3D stacking of CMOS gates.


Transistors are still scaling, for now. SRAM is not! TSMC N3E SRAM density is equal to N5 SRAM density. This is something of an inflection point.

https://www.tomshardware.com/news/no-sram-scaling-implies-on...


I believe this report by WikiChip from IEDM 2022 is an accurate summary of the situation:

> While there were a great number of interesting papers from both academia and industry, it was the one by TSMC that brought frighteningly bad news: whereas logic is still scaling more-or-less along the historical trendline, SRAM scaling appears to have completely collapsed.

https://fuse.wikichip.org/news/7343/iedm-2022-did-we-just-wi...


That is because N3E was supposed to be on GAAFET and they had to push that back to N2 while reworking N3E. You will get some SRAM scaling again on N2 and 14A.


You mean plateau.


Dead, but we're heading into a new paradigm of hardware accelerators (encoders, decoders, AI-optimized chips, multi-die GPUs, ...) and new packaging systems (M1 SoCs, GPU chiplets, ...), all powered by what was previously the big bottleneck: interconnection between these components (AMD's Infinity Fabric, Apple's UltraFusion, ...)


This is a very old paradigm. We’ve been here before:

https://en.m.wikipedia.org/wiki/Intel_8087

The problem is memory bandwidth. This is also a place we’ve been to before :)


According to Jim Keller, not dead:

https://www.youtube.com/live/oIG9ztQw2Gc?feature=share

This isn’t the best recording on YouTube but it’s late and I couldn’t quickly find the other one.


Yeah, after listening to Keller speak about it on Lex's podcast, I'm convinced he probably knows more than just about anybody. Another vote for not dead.


Why do you feel the need to appeal to authority here? Moore's law states that the number of transistors on a chip doubles every two years. Looking at the data[0] confirms, yes, Moore's law is still alive.

[0]: https://upload.wikimedia.org/wikipedia/commons/0/00/Moore%27...


It's true that Moore's original claim was about number of transistors and not about any actual metric of performance.

But "number of transistors" is like "number of lines of code": it's a cost, not a benefit, and if it feels otherwise it's only because that cost is the cost we have to pay for some benefit we care about.

And the claim that's increasingly commonly made these days isn't "transistor density has stopped improving" (though I think that's slowed down somewhat since Moore's time?) but more like "performance has stopped improving".

If we are putting more and more transistors on our chips (hence, larger die area, lower yields, more cost, more heat produced, more expensive cooling required) but not getting corresponding performance improvements in the tasks we actually value, then the thing everyone actually valued about Moore's law is dead, regardless of the status of the literal words of Moore's claim.


You're looking for Koomey's Law, which is a different concept than Moore's law.


Not quite.

Moore's law, strictly, is about the growth of transistor density. Koomey's law, strictly, is about the improvement in computation per unit energy.

Those are both interesting, but frequently people care about something different from either, which is something like "computation per second available in hardware of reasonable size, power consumption and cost". Call this "effective performance".

This can increase even if Moore fails (e.g., we find good ways to exploit parallelism, and build larger devices with more cores). It can fail to increase even if Moore holds (e.g., we can put more cores on a device of the same size, but we aren't good enough at exploiting parallelism so real performance doesn't improve).

It can increase even if Koomey fails (e.g., we find ways to make our hardware faster; there's a corresponding increase in power consumption but we are still able to cool things well enough so we just accept that). It can fail to increase even if Koomey holds (e.g., we can't make anything faster but we find a way to maintain existing speeds at lower power; very nice but no performance improvement unless power consumption is the current bottleneck).

It used to be that effective performance increased exponentially at a fairly consistent rate. This increase has slowed but not stopped; it's not obvious (to me, anyway) what we should expect it to do in the nearish future.

The consistent exponential increase in effective performance had a name, in popular discourse. It was called "Moore's law". It's unfortunate that strictly speaking it isn't what Moore was originally describing, leading to an ambiguity when people refer to "Moore's law" between a law about density and a law about effective performance.

(I unfortunately lack the ability to read minds, so I can't be sure what OP had in mind. But given the statement that "it has big implications for the advancement of AI technology", it looks to me more like effective performance than density.)


That's a nice graphic, thanks. I guess I was talking about the future, more than the right now. During the first Fridman interview, he went into detail about just how small things could go, and why we're a long way away from hitting that limit.


No. Moore's Law has been declared dead since at least 2010, and nothing has changed in terms of the increase in transistor density in chips. Doubling every two years, just as stably as in the 90s.

https://ourworldindata.org/grapher/transistors-per-microproc...

(the stagnation of 2019-2020 has nothing to do with technology; it's COVID)



In its original form, as in "transistors double every two years", it certainly already seems over. Apple silicon is at 5nm today, and Intel claims they'll conquer 3nm by 2030, so even these facts and theoretical statements already don't fit anymore.

We'll probably have a couple more innovations and might get to making a transistor out of a single atom (a silicon atom is ~0.262nm across; a carbon atom ~0.3nm).

5nm / (2*2*2*2) ≈ 0.3nm

So I don't think we're done making faster hardware just yet, but we're certainly getting to the boundaries of what appears to be physically possible.


"5 nm" does not correspond to any physical dimension, so it is nonsensical to compare to size of atom.


“5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers” - wikipedia

It's a marketing term.


A nanometer is a well-defined physical dimension. If we split a millimeter into a million parts, each one is a nanometer.


Sure, but (very unfortunately for clear communication) that's not what "5 nm" written by brancz in "Apple silicon is at 5 nm" means. "Apple silicon is at TSMC N5" would be more accurate.


Yes, it’s a real unit, but when used by silicon manufacturers it’s a marketing term that doesn’t correspond to any physical dimension.


It helps, but shrinking feature size isn't a necessity for increasing transistor count. You can get bigger and bigger dies / interconnected multi-die assemblies too, which is happening if you look at the custom hardware Tesla and others have announced.


Really dumb question from someone who barely knows this stuff, but does quantum computing in theory reduce that size again if it takes off in the future?


No. Quantum computers do not and can not solve the same problems as classical computers, so even if quantum computers were smaller (right now they're not), they are not substitutes. This is notwithstanding the fact that current quantum computers barely work and require a huge support system even if the computation part might be small.


No.


Moore's Law has become a snowclone [1], the current iteration is:

"AI's Rule: Just as Moore's Law unfolds, the language models might expand, doubling the size and inference capability every [insert timeframe], revolutionizing communication and comprehension in unprecedented ways." (generated by ChatGPT)

[1] "a cliché and phrasal template that can be used and recognized in multiple variants", https://en.wikipedia.org/wiki/Snowclone


No, it is not over (yet). The node name is nonsense. https://ieeexplore.ieee.org/document/9150552 DOI: 10.1109/MSPEC.2020.9150552

Edit: I have no idea why anyone would downvote a link to this article. It directly answers the question with a decent level of technical detail. We are nowhere near single atom sized features yet, despite what node names might lead you to believe. There's still quite a ways to go.


I didn't downvote, but maybe it was because the article can't be read without creating an account on the site?


The original Moore's law was about transistors per chip AT CONSTANT COST. That has ceased to be true as design and fabrication costs are increasing exponentially. However, the number of transistors per chip (regardless of cost) has still tracked Moore's law.

What is definitely over is Dennard scaling. As transistors got smaller, it used to be possible to reduce the voltage and current used to drive them. That in turn made it possible to increase the clock frequency without frying the chip: heat dissipation is proportional to the frequency and the drive current. It's not possible anymore because you have leakage current and other parasitics (electrical noise, essentially) that don't scale down with transistor size. In the past you could take a 486 DX, overclock it from 25 to 50MHz, and it would "magically" get about twice as fast. That is not possible anymore, and chips are unlikely to ever run much faster than 5GHz.
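The first-order relation behind this, ignoring leakage, is the usual switching-power formula

  P_\text{dyn} \approx \alpha C V^2 f

Under Dennard scaling both the capacitance C and the supply voltage V shrank with each node, so the frequency f could go up while power per unit area stayed roughly flat; once V stopped scaling (largely because of leakage), raising f just raises the power.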

However, Moore's law still provides performance because you can fit more cores, larger caches, specialized circuits, SIMD units, etc, on the same chip.


The graph in https://en.wikipedia.org/wiki/Moore's_law seems to be carrying on in the same direction.

People have been predicting its end for a long time.

Note also that it's about transistor cost, not about cpu performance - people sometimes think it is because performance used to be more correlated with transistor count.


I thought Moore's Law was dead long ago. I don't understand why some people still bring it up from time to time.

I remember reading in a magazine when I was a kid that Pentium 4 Extreme failed to reach 4.0 GHz in 2003 or 2004.

Since then, it took Intel quite some years to hit 4.0 GHz. Instead, the industry shifted to multi-core CPU, starting with the Core 2 series.

Does multi-core CPU count? I would say it's a bit of a stretch. It's more about horizontal scaling, where multi-CPU or even cluster setups also work in similar ways - there's no hard limit on how many CPUs you can add as long as you can cool them down. You can also make it much larger and sparser, then put it in a large box to deal with the heating problem.

P.S. From the perspective of programming paradigm, people would then find "share nothing" and "message passing" is the way to harness concurrent and multi-core programming, after getting burned again and again with shared memory. These disciplines of not sharing RAM would further make multi-core more like programming on multi-CPU or clusters.


What you are talking about is Dennard scaling, which had transistors being able to go faster as they got smaller.

We still have Moore's law, which gives us more transistors. We just can't use them all at the same time and the individual transistors aren't getting faster (much).

For a while, we were able to use those extra transistors to wring out more performance out of sequential instruction streams by creating ever more complex out-of-order execution engines to figure out parallelism dynamically at run time. That also appears to have run its course.

Now we can use those extra transistors to add more cores, more cache and more specialised execution engines.


Well, as you read the comments here, everyone seems to have a different definition of Moore's Law.

Let's measure transistors in a chip without caring about die size, so you can just use a larger die to keep the Moore's Law narrative alive. Well, at some point that won't work, because your maximum die size is still ~840mm2 due to the reticle limit.

Then what? There are chiplets, or what about packaging all the dies together using CoWoS or EMIB? Yep. More transistors per "chip", because the definition of "chip" just changed from a single die to multiple dies.

Or finally there's the media narrative, or Intel's and AMD's PR, or even how Jim Keller uses it: any continuous improvement in transistor density per mm2 is considered as following Moore's Law.

>So what's the consensus around here?

Generally speaking, HN is a very poor source of information on anything hardware. I wouldn't use any consensus on HN as the final say on the subject.


When Moore wrote in 1965, commercial use of MOS was 10 years in the future, and Dennard scaling would not become widely understood, and start stirring interest in CMOS, until 15 years in the future. So he was actually observing an era much like now, with multiple chiplets inside the can and all sorts of random improvements that had an emergent trend. The Dennard era, which gave Moore's law its main impulse, was about 20 years long. Maybe 25 years if you include controlling tunnel leakage by introducing Hf-based dielectrics, and FinFETs, since they sort of crinkle the surface of the chip to give you double the area and otherwise obey the classic Dennard rules of constant power per unit area.

But even during the Dennard era there were a bunch of big random innovations needed to keep things going. CMP allowing the number of metal routing layers to balloon, keeping distances under control. Damascene metals allowing much finer metals carrying heavier currents. Strained channels for higher frequency and better balance between P and N. Work-function-biased fully depleted transistors to avoid the stochastic problems with doping small channels. Etc.

So what really happened is not that Moore ended. We still have a wealth of random improvements (where "wealth" is the driving force) which contribute to an emergent Moore improvement. But the big change is that Dennard ended, which had given us scaling at constant power. Although some of the random improvements do improve energy efficiency per unit of computation, they are not, overall, holding the line on power per cm2. At the end of the classic Dennard era we were around 25W/cm2, but now we commonly have 50W/cm2 in server chips, and there are schemes in the works to use liquid cooling up to hundreds of W/cm2.

Well, ok. But does that kill Moore? Not if it keeps getting cheaper per unit function. And by that I do not mean per transistor. But as long as that SoC in your phone keeps running a faster radio, drawing better graphics, understanding your voice, generating 3D audio, etc., and is affordable by hundreds of millions of consumers, Moore remains undead.


It is dead in the sense that transistor density isn't doubling every 18/24 months. It is still increasing but not very much.

People are starting to move to getting performance improvements by increasing chip sizes and power budgets - part of the reason why GPUs are more expensive than they used to be.


It's over, but Intel's marketing department likes to redefine it to mean whatever suits the message of the day. You can read the original paper's definition and do the math on transistor counts for, say, typical desktop processors, and arrive at something well under 50% year-over-year growth over the last decade.

"The complexity for minimum component costs has in creased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase."

  1. https://www.intel.com/content/www/us/en/newsroom/resources/moores-law.html
  2. https://download.intel.com/newsroom/2023/manufacturing/moores-law-electronics.pdf
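If you want to do that math yourself, a minimal sketch (the transistor counts below are placeholders; plug in real figures for the chips you care about):

  # Compound annual growth of transistor count between two chips.
  def yoy_growth(start_count, end_count, years):
      return (end_count / start_count) ** (1.0 / years) - 1.0

  # e.g. a hypothetical 1.2e9-transistor desktop CPU in 2013 vs 1.0e10 in 2023:
  print(f"{yoy_growth(1.2e9, 1.0e10, 10):.0%} per year")
  # ~24%/year, well short of the 100%/year that the original one-year doubling implies
  # (and of the ~41%/year that doubling every two years implies).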


Yet another classic Cantrill talk nails this IMHO https://www.youtube.com/watch?v=MtrZJ4UqSn8


It gets modified to suit the current processor trends of the day, with the exclamation that it isn't dead. I don't think most people bother about such silliness, TBH.


General purpose CPUs can only get so good. With that said, there are still many advancements that can be made to shrink die sizes and/or shove more transistors more closely together (3D is hard due to heat… but…).

I’m excited about photon-based processors, but until that’s a reality we still have a ton of headway for application-specific scaling.

If you rip specific loops out of a general purpose CPU, there are still plenty of gains to be made!


> General purpose CPUs can only get so good

People claimed this - “single core is over” - years ago when Intel was stuck doing Skylake refreshes and AMD’s Ryzen only matched the older Haswell CPUs.


To clarify, my point was about general-purpose CPUs, not single or multi-core.

A general-purpose CPU requires that a program is transformed (via compilation or interpretation) into a set of basic instructions, ones that the processor knows how to handle. This means that almost every program requires many cycles to complete, even if the underlying logic itself could theoretically be done within a single cycle (or within no cycle at all!).

On the other end of the spectrum are FPGAs and ASICs, programmable or dedicated circuits that allow you to create specialized logic that corresponds directly to a specific need.

Bringing this back to the discussion at hand: Moore's law cares nothing about general-purpose CPUs, and is just focused on the number of transistors on an IC doubling. With that said, transistors can only get so small (due to the laws of physics), and so one can presume we'll see an end to scaling eventually.

There are changes we can make to improve general-purpose CPU architecture, regardless of transistor count, and there are changes we can make to how we run programs (moving dedicated logic to dedicated circuits). Forcing any logic into a generic set of steps that run a in a loop will always be less efficient than wiring up the logic itself.

The question has always been whether to wait for the machine to get faster or to create the dedicated logic yourself. The former has been the default since the beginning of computing, and has been closely associated with Moore's law. With that said, it doesn't mean that the literal end of Moore's law is the end of computational efficiency gains.



Moore's law coming to an end gives more opportunities to exploit new ways of using tech that weren't pursued until now, since transistor scaling provided the bigger wins. There have been things like "More Moore" and "More than Moore" for a long time.


It seems to me that at some point Moore's law became not an independent result of innovation, but rather a target for Intel and co. to hit. And then it became increasingly expensive to hit the target, and ultimately this target-driven process collapsed.


It's also crucial not to forget about Wirth's law, which states that software is getting slower more rapidly than hardware is getting faster.

So don't fall for software vendors that want to convince you that you need a faster CPU every x years. You don't.


But if software is getting slower, doesn't that imply you do need a faster CPU, unless you're okay running outdated software?


Moore's law normalized to power and cost is dead. Ignoring power and cost, it is continuing along, because people want it more even if it is getting more expensive.


Claims of the death of Moore's Law have been greatly exaggerated.







