
I get that AI is overhyped, but calling it “trash” seems silly and hyperbolic. LLMs can do some pretty neat things if you understand their limitations.



Fair enough, but in my experience when you run into anything outside the happy path it will become an exercise in extreme torment.

Maybe it’s the problem I was debugging (TypeScript, with click and drag events overriding each other), but ChatGPT 4o (along with the previous versions) would get into this circular path of offering suggestions, cycling through the same three solutions over and over.
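
To give a rough idea of the kind of conflict I mean (names made up for illustration, not my actual code), the usual shape of the fix is to track whether the gesture turned into a drag and suppress the click afterwards:

    const DRAG_THRESHOLD_PX = 5;

    // Attach click + drag handlers to one element without them overriding
    // each other: suppress the click if the pointer actually moved.
    function attachDragAwareClick(
      el: HTMLElement,
      onClick: () => void,
      onDrag: (dx: number, dy: number) => void,
    ): void {
      let startX = 0;
      let startY = 0;
      let dragging = false;

      el.addEventListener("pointerdown", (e) => {
        startX = e.clientX;
        startY = e.clientY;
        dragging = false;
      });

      el.addEventListener("pointermove", (e) => {
        if (e.buttons === 0) return; // only track while a button is held down
        const dx = e.clientX - startX;
        const dy = e.clientY - startY;
        if (Math.abs(dx) > DRAG_THRESHOLD_PX || Math.abs(dy) > DRAG_THRESHOLD_PX) {
          dragging = true;
          onDrag(dx, dy);
        }
      });

      // "click" fires after pointerup, so it would clobber the drag unless
      // we swallow it here.
      el.addEventListener("click", (e) => {
        if (dragging) {
          e.stopImmediatePropagation();
          e.preventDefault();
          return;
        }
        onClick();
      });
    }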

It was also interesting that once this circular debugging path started, it would begin removing random properties, objects, or types from the code. You wouldn’t notice just reading it, but any modern editor will surface the problems through your LSP/linter. The removals had nothing to do with the solutions it proposed.
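
As a simplified illustration (not the actual code), the removals looked something like this; you’d never spot it skimming the reply, but the editor lights up immediately:

    // What I pasted in:
    interface DragState {
      startX: number;
      startY: number;
      isDragging: boolean;
    }
    const state: DragState = { startX: 0, startY: 0, isDragging: false };

    // What came back a few iterations later: the same code with `isDragging`
    // quietly dropped from the interface, so every existing usage gets flagged
    // ("Property 'isDragging' does not exist on type 'DragState'").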

This issue seemed to happen to me every time. I think I have to relegate these LLMs to acting like advanced scaffolding tools, where I can include detailed instructions for basic capabilities as well (rather than writing them myself, which still saves a decent number of hours all things considered).

I don’t know if other models are actually good at debugging (going to guess no, because they don’t seem to actually understand the context of the problem, just the relations between keywords when suggesting solutions).

I agree with the other poster: maybe “trash” is a harsh word, but it is extremely bad at debugging anything advanced, it seems.


It's a bit weird; I've definitely observed the circular logic when using it for debugging, but occasionally when I've seen that and called it out, saying "You've already suggested A, B, and C, there's no value in suggesting them again", it will actually come up with something new that solves the problem. I guess by eliminating the most common problems it has to start looking for more obscure stuff?

I've become a bit disillusioned with the idea of it being any good for direct code generation in its current iterations; nearly everything it's generated has required pretty substantial fixes on my end, to the point where I'm not sure it's actually saving me time. The thing I mostly use ChatGPT for now is parsing and digesting server logs.


> I think I have to relegate these LLMs to act like advanced scaffolding tools

This is a reasonable take. When it was first released, it was odd watching the entire internet treat ChatGPT as some kind of oracle.

It’s really amazing at generating content, but it doesn’t actually think or know anything, despite how well it can keep up its side of a conversation. If you have domain expertise in what it is generating, the limitations are very clear.

My concern is for the next generation of students coming up with pervasive AIs. Will they learn how to write critically if they rely on an LLM? Will LLM quality go to shit because it’s just being trained on an internet full of dodgy LLM output?


I disagree. I believe that AI is fundamentally destructive to society because it concentrates wealth into the hands of tech companies, removes jobs at a faster rate than previous automations, encourages human isolation by making people less reliant on each other, and produces fake "art" that floods the market and devalues human expression. I believe it is the prime example of evil incarnate.


The prime example of evil incarnate? Not, say, death camps and genocide?

I think it's generally an interesting technology with a lot of uncontroversially beneficial applications - things like voice transcription, drug discovery/response prediction, defect detection, language translation, tumor segmentation, weather forecasting/early warning systems, EEG decoding, malware detection, or OCR. No longer having to memorise FFMPEG commands also doesn't seem that evil to me.

Automation of tasks is something we all already benefit from constantly, like keeping food fresh without having someone collect ice from mountains, but I do agree with the concern that under capitalism it tends to lead to concentration of wealth. I think the productive path is along the lines of UBI or broader economic changes, allowing everyone to capture the utility, not rejecting the technology itself, and definitely not a hate mob that takes a handy opt-in tool from an open source developer as a stand-in for evil itself.


What is evil is that AI reinforces technological development which in turn is already responsible for genocide against non-human animals, which in my opinion is on the same level as human genocide.


The fact that we're all okay with software that frequently gets things very very very VERY wrong is extremely worrisome. AI in its current state is a high-speed misinformation machine.





