
Humans have repeatedly built things that are beyond their own physical and intellectual capabilities. A calculator can do math problems much more quickly than any human being.



A calculator can do arithmetic faster than a human being. How well would a calculator do at proving Fermat’s Last Theorem?


Not well! But that's in no way relevant to the point, which is that we are demonstrably capable of creating machines that can perform intellectual tasks that we cannot.


Sure, though that really began with the abacus. A skilled abacus user can perform calculations faster than most people can with a calculator. They practice until it's all muscle memory. I think this demonstrates that there's actually very little intelligence involved in arithmetic.
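
To make that concrete: the whole procedure fits in a few lines, with no judgment anywhere. A toy sketch in Python (the function name is mine), doing grade-school addition the same way an abacus or a pencil does:

    # Grade-school addition as a purely mechanical procedure:
    # digit-by-digit rules plus a carry, no meaning involved anywhere.
    def add_by_hand(a: str, b: str) -> str:
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        digits, carry = [], 0
        for x, y in zip(reversed(a), reversed(b)):
            total = int(x) + int(y) + carry
            digits.append(str(total % 10))
            carry = total // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_by_hand("9586", "417"))  # 10003

The rules never branch on what the numbers mean, which is exactly why practice alone can make a human nearly as fast as the machine.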


Sure, but how would the average guy fare at the same task?


Is GPT-4 the "average AI"?


We've yet to build a single machine that is intellectually capable beyond our own understanding.


>>> We're holding AIs to a much higher standard than ourselves. And we move the goal posts all the time as well.

>>> Let's deconstruct that title. "AI isn't good enough". Good enough for what? Great example of moving the goal posts. Because anytime it fails to do whatever, it's not good enough. But what about all the stuff it is good enough for that people are actually using it for already? It's passing tests at a level most humans only manage very briefly after lots of intensive preparation. The knowledge slips away quickly after that.

>>> The way I see it, there's a long, rapidly growing list of stuff that AIs have nailed already and a list of things where it is clearly struggling. That list is shrinking.

>> Humans have repeatedly built things that are beyond their own physical and intellectual capabilities. A calculator can do math problems much more quickly than any human being.

> We've yet to build a single machine that is intellectually capable beyond our own understanding.

Can you unpack this, please? I'll give some examples for you to respond to:

A. Any of our market-making mechanisms (NASDAQ, prediction markets, etc.) synthesizes information faster and more broadly than any one human could. Humans understand the mechanism at work, but cannot really grasp all the information in motion.

B. Weather prediction. While humans understand the mechanics of satellite imagery and numerical simulation, the combined system's ability to predict the weather is superhuman.

C. How are large language models (LLMs) as capable as they are? In many cases, we don't seem to know. This isn't really new; when I last studied it, the field's conceptual understanding of neural network capacity and architecture still had a long way to go. In any case, LLM performance is often superhuman.
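
On C, to make "we don't know" concrete: the core mechanism is a few lines of linear algebra. A minimal sketch of single-head self-attention in Python/NumPy (simplified, no learned projections; the shapes are illustrative):

    import numpy as np

    # Scaled dot-product self-attention, the step repeated throughout an LLM.
    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key similarity
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)             # softmax over keys
        return w @ V                              # mix values by relevance

    X = np.random.randn(4, 8)        # 4 tokens, 8-dim embeddings
    print(attention(X, X, X).shape)  # (4, 8)

We can write the mechanism down completely; what we lack is an account of why stacking it dozens of layers deep produces the capabilities it does.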

Are you saying that enough people, suitably arranged, could do the above tasks? Perhaps they could, but I doubt they could do the tasks reliably and efficiently. We aren't that kind of machine. :)

If you are saying that the fundamentals of intelligence are determined by the structure rather than the substrate, I agree, but I don't think this is salient.

You refer to a machine that is "intellectually capable beyond our own understanding." Above, I've asked you to define what you mean. But perhaps more importantly, why is your threshold important? We already know that machines of many kinds, including sub-human level, are useful. They don't have to exceed our understanding to be useful or dangerous.

My point is this: we've already built machines we can't practically comprehend. We seem able to audit them only by brute force. Alignment seems computationally beyond reach.
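
A quick back-of-the-envelope on the brute-force point (numbers are illustrative; the vocabulary size is roughly GPT-2 scale):

    vocab = 50_000     # ~GPT-2-scale token vocabulary
    prompt_len = 20    # a short prompt
    n = vocab ** prompt_len
    print(f"distinct {prompt_len}-token prompts: ~10^{len(str(n)) - 1}")
    # ~10^93, versus ~10^80 atoms in the observable universe

Exhaustive testing is off the table even for trivially short inputs, so any practical audit has to sample, which is part of why alignment looks computationally out of reach.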

Your thoughts?



