
> It's actually arrogant as hell to assume that we can achieve a higher level of energy efficiency than billions of years of evolution, particularly so early in the game.

You are right that LLMs are still far off from the performance of the human brain. Both in absolute terms, and also relative to the power used.

However, I don't see anything arrogant here. We have lots of machines that can do many tasks more energy efficiently (and better) than humans. Both mechanical and intellectual tasks.




It's not arrogance to think you can create a tool that does one thing the brain does, better than the brain and for less power. It's arrogance to think that you can do everything the brain does for less power. Living organisms have been relentlessly honed for the ability to efficiently solve varied problems across ~10^40 experiments over the age of the earth. If some marginally intelligent monkeys think they can build an error-corrected, digital system that encompasses all of that functionality while using less power, I'd say that's obviously arrogance, particularly if it hasn't been the subject of a civilizational drive for a few millennia already.


> Living organisms have been relentlessly honed for the ability to efficiently solve varied problems across ~10^40 experiments over the age of the earth.

Evolution has been optimising them for creating descendants, not general problem solving with minimum energy expenditure.

No one expects that LLMs can solve all problems: they can't. They can only predict text, nothing else. They can't fight off a virus infection or evade a lion. Specifically, LLMs can't reproduce at all either, let alone efficiently. Reproduction is what evolution is all about.


Life is optimized for _SURVIVAL_, which means being able to navigate the environment, find and utilize resources, and ensure that they continue to exist. Reproduction is just a strategy for that.

LLMs are human thinking emulators. They're absolutely garbage compared to "system 1" thinking in humans, which is massively more efficient. They're more comparable to "system 2" human thought, but even there I doubt they're close to humans except for cases where the task involves a lot of mundane, repetitive work. Even for complex logic and problem-solving tasks, I'd be willing to bet that the average competitive mathematician is still an order of magnitude more efficient than a SoTA LLM at problems they could both solve.


> LLMs are human thinking emulators.

They aren't. They are text predictors. Some people think verbally, and you could perhaps plausibly make your statement about them. But for the people who eg think in terms of pictures (or touch or music or something abstract), that's different.

> They're absolutely garbage compared to "system 1" thinking in humans, which is massively more efficient. They're more comparable to "system 2" human thought, but even there I doubt they're close to humans except for cases where the task involves a lot of mundane, repetitive work. Even for complex logic and problem-solving tasks, I'd be willing to bet that the average competitive mathematician is still an order of magnitude more efficient than a SoTA LLM at problems they could both solve.

LLMs are still in their infancy compared to where we will be soon. However, for me the amazing thing isn't that they can do a bit of mathematical reasoning (badly), but that they can do almost anything (badly). Including reformulating your mathematical proof in the style of Chaucer or in Spanish etc.

As for solving math problems: LLMs have read approximately every paper ever published, but they are not very bright. They are like a very well read intern. If anyone has ever solved something like your problem before (and many problems have been), you have an ok chance that the LLM will be able to help you.

If your problem is new, or you are just getting unlucky, current LLMs are unlikely to help you.

But if your problem has been solved before, the LLM is most likely going to be more efficient than the mathematician, especially if you compare costs: companies can charge very little for each inference, and still cover the cost of electricity and amortise training expenses.

A month of OpenAI paid access costs you about 20 dollars or so? You'd have to be a pretty clueless mathematician if 20 dollars an hour was your best money-making opportunity. 100+ dollars an hour is more common for mathematicians, working as e.g. actuaries or software engineers or quants. (Of course, mathematicians might not optimise for money, and might voluntarily go into low-paying jobs like teaching, or just lazing about. But that's irrelevant for the comparison of opportunity costs.)
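To make the opportunity-cost point concrete, here is a rough back-of-envelope sketch. The 20 dollars/month subscription and the 100 dollars/hour rate are the figures assumed above; the hours saved per month is a purely hypothetical number for illustration.

    # Back-of-envelope opportunity-cost comparison.
    # All figures are assumptions from the comment above, not measurements.
    subscription_per_month = 20.0   # assumed LLM subscription, dollars/month
    mathematician_per_hour = 100.0  # assumed value of the mathematician's time, dollars/hour
    hours_saved_per_month = 5.0     # hypothetical time the LLM saves its user

    value_of_time_saved = hours_saved_per_month * mathematician_per_hour
    print(f"Subscription cost:   ${subscription_per_month:.0f}/month")
    print(f"Value of time saved: ${value_of_time_saved:.0f}/month")
    # Under these assumptions the subscription pays for itself if it saves
    # even a fraction of one working hour per month.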



