Three months ago in the Copilot thread I was saying

> in 5 years will there be an AI that's better than 90% of unassisted working programmers at solving new leetcode-type coding interview questions posed in natural language?

and getting pooh-poohed. https://news.ycombinator.com/item?id=29020401 (And writing that, I felt nervous that it might not be aggressive enough.)

There's a general bias in AI discussions these days: people forget that the advance they're now pooh-poohing was itself dismissed the same way, as probably way off in the indefinite future, until surprisingly recently.




The issue is that these techniques' capabilities are growing exponentially, while we have a habit of extrapolating linearly. Some saw the glaring deficits in Copilot and reasoned that linear improvement would still leave glaring deficits. I don't know that this bias can ever be corrected. A large number of intelligent people will simply never be convinced that general AI is coming soon, no matter what evidence is presented.


> techniques are growing in capabilities exponentially, while we have a habit of extrapolating linearly

What does this even mean? How do you put a number on AI capability? You can say it's growing faster than people expect, but what would exponential or linear growth in AI capability even look like?


I take your point that the linear/exponential terminology is a bit dubious. But the simple way to make sense of it is to go by various benchmarks, e.g. the power-law relationship between model accuracy and model size: https://eliaszwang.com/paper-reviews/scaling-laws-neural-lm/
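
To make that concrete, here's a minimal sketch of the parameter-count scaling law from Kaplan et al. 2020 (the paper the linked review covers). The fitted constants alpha_N ≈ 0.076 and N_c ≈ 8.8e13 are that paper's reported values for non-embedding parameters; treat them as illustrative, not authoritative.

    # Power-law scaling of test loss with model size:
    # L(N) = (N_c / N) ** alpha_N, loss in nats per token.
    # Constants are Kaplan et al. 2020's fitted values (illustrative only).
    ALPHA_N = 0.076
    N_C = 8.8e13

    def predicted_loss(n_params: float) -> float:
        # Predicted test loss for a model with n_params non-embedding
        # parameters, trained to convergence with enough data.
        return (N_C / n_params) ** ALPHA_N

    for n in (1e6, 1e8, 1e10, 1e12):
        print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")
    # Each 100x in parameters multiplies loss by 100**-0.076 ~ 0.70, so
    # exponentially more scale buys a steady, roughly constant-factor drop.

So "exponential growth" here cashes out as benchmark curves that keep improving smoothly as you pour in exponentially more parameters and compute, rather than plateauing where the linear extrapolators expect.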



