
I sincerely worry about a future when most people act in this same manner.

You have - for now - sufficient experience and understanding to be able to review the AI's code and decide if it was doing what you wanted it to. But what about when you've spent months just blindly accepting what the AI tells you? Are you going to be familiar enough with the project anymore to catch its little mistakes? Or worse, what about the new generation of coders who are growing up with these tools, who NEVER had the expertise required to be able to evaluate AI-generated code, because they never had to learn it, never had to truly internalize it?

It's late, and I think if I try to write any more just now, I'm going to go well off the rails, but I've gone into depth on this topic recently, if you're interested: https://greaterdanorequalto.com/ai-code-generation-as-an-age...

In the article, I posit a less glowing experience with coding tools than it sounds like you've had, but I'm also envisioning a more complex use case, like when you need to get into the meat of some you-specific business logic it hasn't seen, not common code it's been exposed to thousands of times, because that's where it tends to fall apart the most, and in ways that are hard to detect and with serious consequences. If you haven't run into that yet, I'd be interested to know if you do some day. (And also to know if you don't, to be honest! Strong opinions, loosely held, and all that.)




If we keep at this LLM-does-all-our-hard-work-for-us approach, we’re going to end up with some kind of Warhammer 40k tech-priest-blessing-the-magic-machines level of understanding, where nobody actually understands anything and we’re technologically stunted, but hey, at least we don’t have the warp to contend with, and some shareholders got rich at our expense.


Unless it's all a ploy by Tzeentch to prepare the ground for the coming of the Chaos Gods.


You and I seem to live in very different worlds. The one I live and work in is full of overconfident devs who have no actual IT education and mostly just copy and modify what they find on the internet. The average level of IT people I see daily is downright shocking, and I'm quite confident that OP's workflow might be better for these people in the long run.


It's going to be very funny in the next few years when Accenture et al. charge the government billions for a simple Java CRUD website thing that's entirely GPT-generated, and it'll still take 3 years and not be functional. Ironically, it'll be of better quality than they'd deliver otherwise.

This is probably already happening.


GPTs will be masters at make-believe. The project will last 15 years and cost a billion before the government finds out that it's a big bag of nothing.


> The one I live and work in is full of over confident devs that have no actual IT education and mostly just copy and modify what they find on the internet.

Too many get into the field solely due to promises of large paychecks, not due to the intellectual curiosity that drives real devs.


I actually do think this is a legitimate concern, but at the same time I feel like when higher-level languages were introduced people likely experienced a similar dilemma: you just let the compiler generate the code for you without actually knowing what you're running on the CPU?

Definitely something to tread carefully with, but it's also likely an inevitable aspect of progressing software development capabilities.


A compiler is deterministic. An LLM is not.
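
A toy sketch of what I mean, with made-up next-token probabilities rather than any real model's API: at temperature 0 you get greedy decoding, which always picks the same token, while any nonzero sampling temperature means two runs can diverge.

    # Toy sketch, not a real model API: why sampled decoding is nondeterministic.
    import random

    def next_token(probs, temperature):
        if temperature == 0:
            # Greedy decoding: always the highest-probability token.
            return max(probs, key=probs.get)
        # Temperature-scaled sampling: weight each token by p ** (1/T), then draw.
        weights = [(t, p ** (1.0 / temperature)) for t, p in probs.items()]
        r = random.random() * sum(w for _, w in weights)
        for token, w in weights:
            r -= w
            if r <= 0:
                return token
        return token  # floating-point fallback

    probs = {"return": 0.5, "raise": 0.3, "pass": 0.2}
    print(next_token(probs, 0))    # same answer on every run
    print(next_token(probs, 0.8))  # can differ between runs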


Place-and-route compilers used in semiconductor design are not. Ironically, simulated annealing is the typical mechanism, and it is, by any appropriate definition, imo, a type of AI.
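
For anyone who hasn't seen one, a minimal annealing sketch (with a toy 1-D cost function standing in for wirelength, not a real place-and-route engine). The random moves are exactly why two runs can land on different, similarly good results:

    # Minimal simulated-annealing sketch; the cost function is a toy stand-in.
    import math, random

    def cost(x):  # pretend this measures wirelength/congestion
        return (x - 3) ** 2 + 2 * math.sin(5 * x)

    x = random.uniform(-10, 10)  # random initial "placement"
    temp = 10.0
    while temp > 1e-3:
        candidate = x + random.gauss(0, 1)  # perturb the solution
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept regressions with probability
        # exp(-delta / temp), which shrinks as the system cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= 0.99  # cooling schedule

    print(round(x, 3), round(cost(x), 3))  # result varies from run to run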

Whatever you do in your life using devices that run software is proof that these tools are effective for continuing to scale complexity. Annoying to use, also ;)


I take it you haven't seen the world of HTML cleaners [1]?

The concept of gluing together text until it has the correct appearance isn't new to software. The scale at which it's happening is certainly increasing, but we already had plenty of problems with the existing system. Missouri certainly didn't develop their website [2] using an LLM.

IMO, the real problem with software is the lack of a warranty. It really shouldn't matter how the software is made, just the qualities it has. But without a warranty it does matter, because how it's made affects the qualities it has, and you want the software to actually work even if that's not promised.

[1]: https://www.google.com/search?q=html+cleaner

[2]: https://www.npr.org/2021/10/14/1046124278/missouri-newspaper...


> I take it you haven't seen the world of HTML cleaners [1]?

Are you seriously comparing deterministic code formatters to nondeterministic LLMs? This isn't just a change of scale; it's qualitatively different.

> Missouri certainly didn't develop their website [2] using an LLM.

Just because the software industry has a problem with incompetence doesn't mean we should be reaching for a tool that regularly hallucinates nonsense.

> IMO, the real problem with software is the lack of a warranty.

You will never get a warranty from an LLM because it is inherently nondeterministic. This is actually a fantastic argument _not_ to use LLMs for anything important, including generating program text for software.

> It really shouldn't matter how the software is made

It does matter, regardless of warranty or the qualities of the software, because programs ought to be written to be read by humans first and machines second if you care about maintaining them. Until we create a tool that actually understands things, we will have to grapple with the problem of maintaining software that is written and read by humans.


> But what about when you've spent months just blindly accepting what the AI tells you?

Pour one out to the machine spirit and get your laptop a purity seal.


This seems a little silly to me. It was already possible for a script kiddie to kludge together something they didn’t understand: copying code snippets from Stack Overflow, etc. And yet, developers continue to write finely crafted code that they understand in depth. Just because we’ve made this process easier for the script kiddies doesn’t prevent experts from existing, or the market from realizing these experts are necessary to a well-run software business.


Nothing prevents you from asking an LLM to explain a snippet of code. And then asking it to explain deeper. And then finally doing some quick googling to validate that the answers seem correct.

Blindly accepting code used to happen all the time; people copy-pasted from Stack Overflow.


Yes, but copy/pasting from Stack Overflow was a meme that was discouraged. Now we've got people proudly proclaiming they haven't written a line of code in months because AI does everything for them.


> And then finally doing some quick googling to validate that the answers seem correct.

There will come a time when there won't be anyone writing information to check against. It'll be AI all the way down. Or at least it will be difficult to discern what's AI and what isn't.




