
On his recent Lex Fridman podcast appearance, Jim Keller speaks to exactly this mindset. He says that people have been heralding the death of Moore's law since he started and that the "one-trick ponies" just keep coming, and he doesn't doubt that they will continue.



> people have been heralding the death of Moore's law since he started and that the "one-trick ponies" just keep coming, and he doesn't doubt that they will continue.

The situation is clearly far worse than you suggest. Back in the 1990s and early 2000s, apparent computer performance was doubling roughly every two years: your shiny new desktop was obsolete in 24 months.

Today, we're lucky to get a 15% gain in two years. The "one-trick ponies" help narrow the "apparent performance" gap, but they are, by definition, implemented out of desperation. They aren't enough to keep Moore's law alive (it's already dead), and their very existence is evidence of its death.
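To put numbers on how big that gap is, here's a quick back-of-the-envelope sketch in Python, using only the two rates mentioned above (doubling every two years vs. roughly 15% over two years):

    # Annualized growth implied by each regime.
    old_rate = 2 ** (1 / 2) - 1     # doubling every 2 years -> ~41% per year
    new_rate = 1.15 ** (1 / 2) - 1  # +15% every 2 years     -> ~7.2% per year

    # Compound each regime over a decade.
    years = 10
    print(f"old regime: {(1 + old_rate) ** years:.1f}x over {years} years")  # ~32x
    print(f"new regime: {(1 + new_rate) ** years:.1f}x over {years} years")  # ~2x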


Moore's law is only about the number of transistors per chip doubling every 24 months, not about performance. Since that trend is still holding, Moore's law is not dead, as so many have claimed.


But what is it good for if it does not improve performance? For example, an ever larger share of the transistors on a chip sits unused at any given time due to cooling constraints.


It makes other workloads more economical.

And as long as there's something to gain from going smaller/denser/bigger, and as long as the cost-benefit is good, we'll have bigger chips with smaller and denser features.

Sure, cooling is a problem, but it's not as if we're seriously trying yet. Chips are still just air-cooled. Maybe we'll integrate microfluidic heat-pump cooling into chips too.

And it seems there's a clear need for more and more computing. The "cloud" is growing at an enormous rate. Eventually it might make sense to build a tightly integrated, datacenter-oriented system.


It obviously does improve performance; otherwise, why would people be buying newer chips? :) That doesn't mean we'll see exponential performance increases, though. In specialized scenarios, like video encoding and machine learning, we do see large jumps in performance.


I thought it was about the number of transistors per chip at the optimal cost per transistor.


On the contrary, I think this is fantastic news:

- It means consumers won't have to keep buying new electronic crap every couple of years. Maybe we can finally get hardware that's built to be modular and maintainable.

- It means performance gains will have to come from writing better software. Devs (and, more importantly, the companies that pay devs) will be forced to care about efficiency again. Maybe we can kill Electron and the monstrosity of multiple megabytes of garbage JS on every site.

The sooner we bury Moore's Law and the myth of "just throw more hardware at it", the better.


If you look beyond the failing Intel, AMD has been delivering 15-20% improvements year over year.


>Today, we're lucky to get a 15% gain in two years.

The 2012 MacBook Pro 15-inch I'm typing this on is about 700 on Geekbench single-core, while the 2019 16-inch is about 950. 35% "improvement" in seven years!

M1 13-inch is 1700 on single-core, which is why I hope to upgrade once the 16-inch Apple Silicon version comes out.
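For what it's worth, here's the annualized rate implied by those rough Geekbench scores (a quick Python sketch; the scores are just the approximate numbers quoted above):

    # Approximate single-core Geekbench scores quoted above.
    mbp_2012, mbp_2019, m1_2020 = 700, 950, 1700

    # Annualized gain across seven years of Intel MacBook Pros.
    intel_cagr = (mbp_2019 / mbp_2012) ** (1 / 7) - 1
    print(f"2012 -> 2019 Intel: {intel_cagr:.1%} per year")        # ~4.5% per year

    # Single-generation jump from the 2019 Intel machine to the M1.
    print(f"2019 Intel -> M1: {m1_2020 / mbp_2019 - 1:.0%} jump")  # ~79%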

>The "one-trick ponies" help narrow the "apparent performance" gap, but by definition, are implemented out of desperation.

I don't think that's right. x86 hit an apparent performance barrier in the early 2000s, with the best available CPUs being the Intel Pentium 4 and the AMD Thunderbird, both horribly inefficient for the performance gains they eked out; those were very much one-trick ponies created out of desperation. It took a skunkworks project by Intel Israel, which miraculously turned the Pentium III into the Core microarchitecture, to get out of the morass. Another meaningful leap came with the move from Core Duo to Core i, but the PC industry has been stuck with Core i for almost a decade.

We've finally smashed past this with Apple Silicon, but it is certainly not a one-trick pony; Apple could sell it to the world tomorrow and have a line of customers going out the door, just like it could have sold the A-series mobile processors to rivals. AMD Ryzen isn't quite the breakthrough Apple Silicon is, but it is good enough for those who need x86.


Apple's M1 is a good processor, but the only reason it "smashed past" previous MacBook single-core results is that Apple was using older, lower-powered Intel processors.

It is not twice as fast as even mobile x86 stuff, as much as people seem to want to think otherwise.


Anecdata of one, but when compiling our product at work on my three machines (a 2019 Intel MacBook Pro, a 2020 10-core Intel iMac, and an M1 Mac mini), the MacBook Pro is the slowest, and the iMac isn't that much faster than the Mini. It's something like:

- MacBook Pro: 9 minutes

- Mac mini: 5 minutes

- iMac: 4 minutes


Where the M1 really blows any other CPU away is single-threaded performance; multi-threaded performance is just normal. So it's not surprising that it's not faster than your 10-core iMac when compiling (which I assume is using 100% of every core).

In fact, given that the M1 is an 8-core CPU and your iMac has a 10-core CPU, compile times of 5 and 4 minutes respectively suggest that they're fairly similar in per-core throughput, and the iMac wins only because it has more cores.
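A quick sanity check of that in Python, assuming the build saturates every core (and ignoring that four of the M1's eight cores are efficiency cores):

    # Total core-minutes per build, assuming the compile keeps all cores busy.
    m1_core_minutes = 8 * 5      # 8 cores * 5 minutes  = 40
    imac_core_minutes = 10 * 4   # 10 cores * 4 minutes = 40

    # Equal core-minutes for the same job implies roughly equal per-core
    # throughput; the iMac finishes first only because it has more cores.
    print(m1_core_minutes, imac_core_minutes)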



Is this a bad thing? This seems like a great outcome for consumers, and will reduce e-waste. I look forward to a future where people see less need to upgrade year after year.


Which is why Apple is also shifting its revenue streams toward services.


This is false; computer performance has been doubling nearly every year. See, for example, https://www.top500.org/statistics/perfdevel/


How is this calculated? It isn't very clear. Is this representative of individual devices, or is it driven by there simply being more of the same devices?


Even then, Jim Keller is using a looser definition of Moore's law, i.e. he's saying there's a lot of scaling left rather than that scaling will continue as it did in the past.


"Moore's law" was strictly about the average cost of a transistor, not performance in general.



