
I think the problem is that they aren’t mind-blowing, but an improvement.

I resent some LLM implementations on principle, but decided to give these code helpers a try. What I found was that they're fairly bad: I kept telling them the solution didn't work, only to be presented with another small tweak.

So I don’t see the point of outsourcing my thinking. I’d rather remain intelligent and do the search/try/tweak on my own, instead of pretending a half-assed LLM is a genius.

That doesn’t mean they don’t have good use cases, or aren’t an improvement on previous tech. But we should definitely stop calling them mind-blowing. Jaron Lanier predicted long ago that we’d willingly downplay human intelligence to pretend AI was… I.




> I think the problem is that they aren’t mind-blowing, but an improvement.

You seem to be doing what GP is pointing out.

GP's claim is that the relative improvement itself is mind-blowing, not that the tech is mind-blowing in an absolute sense.

I tend to agree: much of the detraction hangs on the current state rather than on a probable potential state informed by recent relative advancements.

In other words, many proponents are chuffed because of that potential, not because of the actuality. Likewise, skeptics are reserved because of the actuality and not because of the potential, for whatever reason (there are at least a few main ones, I think).


See, I'm very skeptical that we can guess at the potential, because so much of it depends on research and figuring out better ways to do things going forward, and successful research programs are very unpredictable.

I feel like people are taking Moore's Law, which is definitely a real thing, and thinking that everything else is going to advance like semiconductors did, and I'm just not seeing it in any other field. It's not true in software development (where gains, such as they are, are more linear than exponential); it's definitely not true in rockets or civil engineering or steel or anything like that. So I'm afraid a whole lot of people are expecting Moore's Law-type improvements in AI, when really AI advances more like punctuated equilibrium: a sudden dramatic improvement, then a long period of consolidation and stasis, then another sudden dramatic improvement, often in a totally different, unpredictable area.

But I've just been keeping tabs on AI since the hot way to do it was expert systems back in the 1990s, and I'm aware of its history since Norbert Wiener wrote Cybernetics back in 1948, and this seems to be a repeating pattern: a single major breakthrough (in this case, honestly, the combination of large quantities of data with NNs, with driving and natural language being the two easiest kinds of data to get, and so the most prominent examples), followed by a lengthy fallow period where not much appreciable progress happens, then another breakthrough, often orthogonal to where earlier breakthroughs happened.



