Could you elaborate on why that implication might be true?

I can think of a counterpoint:

Humans can deal with 'error cases' pretty readily. The effort for humans to get those last few 9's might be linear: one extra checklist might be all that was needed, or simply adding a whole extra human to catch the problem. OTOH, for computers to get that last 0.001% correct might take orders of magnitude more effort. Human effort and computer effort don't scale at the same rate. Why then should we think that something humans can do with 2x effort wouldn't require 2000x better AI?
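
A toy way to frame that asymmetry (the effort curves here are pure assumptions for illustration, not data):

    # toy model: effort to reach n "nines" of reliability (3 nines = 99.9%)
    for n in range(2, 6):
        human_effort = n          # assumed roughly linear: one more checklist per nine
        machine_effort = 10 ** n  # assumed multiplicative: 10x effort per extra nine
        print(f"{n} nines: human={human_effort}, machine={machine_effort}")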

There are certainly cases where the inverse is true, where the human effort to get better reliability exceeds what a computer needs (monitoring and control systems are good examples, e.g. radar operators, nuclear power plants). Though in cases where computer effort scales better than human effort, it's very likely those tasks have been automated already. The high level of reliability in aviation is likely thanks in part to automation of the tasks computers are good at.




> 2000x better AI?

Even if it does, that puts AI ahead in what, 22 years, with 2x improvements every 2 years? The simple problem with us humans: we haven't notably improved in intelligence for the last 100,000 years, and we'll be beaten eventually. It's not even a question barring some end-of-the-world event; we already know it's completely possible, because we are living proof that 20 W can be at least this smart.
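
A minimal check of that figure, taking the (contested) 2x-every-2-years premise at face value:

    import math
    doublings = math.log2(2000)  # ~10.97 doublings needed to close a 2000x gap
    years = doublings * 2        # at one doubling every 2 years
    print(round(years, 1))       # ~21.9, i.e. roughly 22 years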

And really, it's just the upfront R&D that's expensive; silicon is cheap. Humans are ultra expensive all round, constantly, so the longer you use the result, the more that initially ludicrous-seeming cost amortizes toward zero.
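
A sketch of that amortization, with entirely made-up numbers:

    upfront_rnd = 1e9     # hypothetical one-time R&D cost, in $
    per_task = 0.001      # hypothetical marginal cost per task, in $
    tasks = 1e12          # tasks served over the system's lifetime
    cost_per_task = upfront_rnd / tasks + per_task
    print(cost_per_task)  # 0.002: the upfront cost all but vanishes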


If I can paraphrase, you believe that humans are at "intelligence level" 100, and AI is essentially somewhere around 10 or 20 and doubling every 2 years.

First, is AI actually improving 2x every 2 years? Can you link to some studies that show the benchmarks? AFAIK ChatGPT was something of an 8- or 10-year project for OpenAI. It being released seemingly "out of nowhere" really biases the perception that things are happening faster than they actually are.

Second, are human intelligence and AI even in the same domain? How many dimensions of intelligence are there? Of those dimensions, on which ones is AI still at zero? Which of those dimensions might be entirely impossible for AI? If the answer is "AI can be just as smart as humans in every way", while we still don't understand that much about human intelligence and cognition, let alone that of other animals... I'm skeptical the answer is yes. (Consider science's view of animal intelligence: for a long time the thought was that animals are biological automatons. That alone suggests we don't even understand intelligence, let alone how to build AGI.)

Next, even if the intelligence race is single-dimensional and actually the same sport and playing field, what is to say that the exponential growth you describe will be consistent? Could it not be the case that the 1000x-to-1001x improvement is just as hard as all of the first 1000x combined? What is to say the growth in difficulty isn't also exponential, or even combinatorial?
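
To make that worry concrete (a toy model, not a claim about actual scaling laws): if each successive capability doubling costs twice the effort of the last, the total bill is dominated by the final step:

    # toy comparison over ~log2(2000) ~= 11 doubling steps
    steps = 11
    constant_cost = sum(1 for _ in range(steps))     # each step as cheap as the last: 11 units
    doubling_cost = sum(2**i for i in range(steps))  # each step 2x the last: 2047 units
    print(constant_cost, doubling_cost)
    # with doubling costs, the final step alone (2**10 = 1024) costs as much
    # as every previous step combined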

> 20 W can be at least this smart.

20 W? I'm not familiar with it.



