
One of the lessons one learns as a programmer is to write code that can later be read back and understood. This applies to code written by others as well as code written by oneself.

When it comes to production-quality code that captures complex and/or business-critical functionality, you want an experienced person to have architected the solution and written the code, and you want that code to have been tested and reviewed.

The risk right now is that many IT companies will try to win bids by throwing inexperienced devs at complex problems, committing to lower prices and timelines on the strength of a USD 20 per month GitHub Copilot subscription.

You individually may enjoy being able to put together solutions as a non-programmer. Good for you. I myself recently used ChatGPT to learn how to write a web app in Rust, and I got things working with some trial and error, so I understand your feeling of liberation and accomplishment.

Many of us on this discussion thread work in teams and on projects where the code is written for professional reasons and for business outcomes. The discussion is therefore focused on the reliability and readability of AI-assisted coding.




Hmm. I ended up with a few 750+ line chunks of JS, beyond ChatGPT's ability to parse back to me. So my go-to technique now is to break the code into smaller chunks and make them files in a folder structure, rather than keeping everything in a single .js file. Readability is an issue for me, even more so because I rely on ChatGPT to explain the code back: sometimes I understand it myself, but usually I need the LLM to confirm my understanding. I'm not sure this scales for teams. My work has Sourcegraph, which should help with codebases, but so far it hasn't been particularly useful. I can use it to find specific vulnerable libraries, keys in code, etc., but that is just search.
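A minimal sketch of that refactor, assuming Node-style CommonJS modules (the file names and the `parseBearerToken` helper are illustrative, not from the original code): instead of one 750-line script, each concern gets its own small file that the entry point wires together.

```javascript
// auth.js — one concern per file, pulled out of the monolithic script.
// Parses an Authorization header like "Bearer abc123".
function parseBearerToken(header) {
  // Returns the token string, or null when the header is missing/malformed.
  const match = /^Bearer\s+(\S+)$/.exec(header || "");
  return match ? match[1] : null;
}

module.exports = { parseBearerToken };

// app.js — the entry point now only wires modules together, e.g.:
//   const { parseBearerToken } = require("./auth");
//   const token = parseBearerToken(req.headers.authorization);
```

Each file stays small enough to paste into a chat window whole, which is the practical constraint here.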

What I really need is something like "show me the complete chain of code for this particular user activity in the app and highlight the tokens used in authentication" — something even senior engineers struggle to pull from our hundreds of services and huge pile of code. So far Sourcegraph and Lightstep are incapable of doing that job. Maybe with better RAG, or infinite context length, or some other improvement, that tool will exist. But currently the combined output of thousands of engineers over years is almost un-navigable. Some of that code might be crisp; some of it is definitely of LLM-like quality (in a bad way). I know this because I hear people's explanations of said code and how they misremembered its function during post-mortems. Folks copy and paste outdated example code from the wiki, etc., i.e. they build things they don't understand. I presume that used to happen from Stack Overflow too. Engineers moving to LLMs won't make too much difference, IMO.

I agree, your points are valid, but I see "prompt engineering" as the democratization of the ability to code. Previously this was all out of reach for me, behind a wall of memorized language and syntax that I touched in the Pascal era and never crossed. It took 12 hours to build my first Node.js app that did something exactly the way I had wanted for 30 years (including installing Git and VS Code on Windows; see, now I am truly one to be reviled).


Exactly. Your comment summarizes the situation perfectly.




