Our experience coding from scratch will have about as much validity as the experience of people who coded in assembly has today. It'll be a specialized skill with its own niche but without much value in the broader tech industry.
The experience I got from coding assembly back in the day is one I use regularly today. It means I'm acutely aware of how various abstractions are implemented, and comfortable looking at assembly in a debugger to spot performance issues. The same will keep being true: even if you're working in higher-level layers (I'm mostly writing Ruby and JavaScript these days), understanding the abstractions lets you make better choices.
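As a toy illustration (TypeScript here, not anything from a real codebase): two equally "idiomatic" ways to flatten a list of chunks, where knowing how the abstraction is implemented tells you which one belongs on a hot path.

    // O(n^2): each spread copies the entire accumulator again.
    function collectSlow(chunks: number[][]): number[] {
      let out: number[] = [];
      for (const chunk of chunks) {
        out = [...out, ...chunk]; // fresh allocation + full copy every iteration
      }
      return out;
    }

    // O(n) amortized: push appends in place; the engine grows the
    // backing storage geometrically, so copies are rare.
    function collectFast(chunks: number[][]): number[] {
      const out: number[] = [];
      for (const chunk of chunks) {
        for (const x of chunk) out.push(x);
      }
      return out;
    }

Both read fine in review; only the second scales, and you mostly learn to see that by having watched what the machine actually does underneath.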
> but without much value in the broader tech industry.
Not talking about AI; but if you want to work on compilers, the Linux kernel, game engine optimization, GPU-optimized code, and many other things besides web development, you need to be more than familiar with how to work with machine instructions, even if you aren't working with them every single day. These are significant parts of the industry which make the rest possible.
I'm not sure how you quantify "very small" here? I'm running low-level conferences [0] and they're big enough for me to be a full-time organizer for them, with a job fair that has included Mozilla, real estate, nuclear defense, and various game studios.
We could check popular job boards (or HN's Who's Hiring threads) and see how many postings mention compilers, kernels, assembly, GPU, etc. Then compare that number to the total.
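A rough sketch of that methodology against one Who's Hiring thread, using the public Algolia HN Search API (the story id below is a placeholder; also, this counts replies as well as top-level postings, and a posting matching several keywords is counted more than once, so treat the output as a rough ratio only):

    const STORY_ID = 12345678; // hypothetical thread id, substitute a real one
    const API = "https://hn.algolia.com/api/v1/search";

    async function countHits(query: string): Promise<number> {
      const params = new URLSearchParams({
        query,
        tags: `comment,story_${STORY_ID}`, // restrict to comments in that thread
        hitsPerPage: "1", // we only need nbHits, not the hits themselves
      });
      const res = await fetch(`${API}?${params}`);
      const body = (await res.json()) as { nbHits: number };
      return body.nbHits;
    }

    async function main() {
      const keywords = ["compiler", "kernel", "assembly", "GPU", "embedded"];
      const total = await countHits(""); // empty query matches everything
      let lowLevel = 0;
      for (const kw of keywords) {
        lowLevel += await countHits(kw);
      }
      console.log(`~${lowLevel} of ${total} comments mention low-level keywords`);
    }

    main().catch(console.error);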
Did the number of jobs for people coding in assembly go down, or did the number of jobs for people not coding in assembly go up? (I truly don't know. Just curious.)
I tend to believe that the number of people working with machine instructions might have even increased! We have much more of the tech I mentioned nowadays compared to the 90s. Also, a lot of other industries have entered the digital age, and in many of them, e.g. aviation, transportation, finance, etc., knowing the intricacies of your hardware is crucial.
The thing is that web development has seen such a boom that it has seemingly dwarfed everything else in its wake.
I describe it as the difference between "software development" and "software engineering". Or between "business" and "infrastructure". Both are valuable, but they make different demands on understanding, rigor, etc.
If that was all they were doing, sure... but if they wrote language parsers, understood computing from the NAND gate up, and worked their way up and down the stack over and over, these types of guys will blow past all of us. The understanding you get from going up and down stacks is ridiculous.
I mean, I've worked up and down most of the stack. The understanding is great. It still doesn't give me much of a competitive advantage over the average "full-stack" (a laughable term given how narrow a part of the stack it encompasses) engineer when we're all working at a FAANG / generic SaaS startup churning out agile code commits for a generic CRUD web app.
In this AI age I'm seeing it come to fruition. Full-stack developers have (IMO) more potential now than narrow specialists because they can get away with high-level remarks and let the AI fill in the details.
Yeah, I'd say more specifically that a general first-principles understanding of different areas and the ability to think about systems / architecture / general logic is very useful.
Unlike for traditional polymaths, detailed knowledge of and familiarity with specific APIs and libraries is less needed.
Exactly, this sounds like people who think knowing assembly or managing memory manually in C gives them an advantage. I learned those things too and am not a single bit ahead of a developer who learns newer languages and never does a malloc in their life.
Perhaps the old-fashioned programming skills will be applied to writing test cases for the AI-generated code.
There are still areas in software development where (non-systemized) failure is not an option, and where AI-generated code absolutely can be useful but would need careful validation from a human: not only for unintentional errors, but also for errors potentially maliciously seeded into the model being used.
Perhaps manually writing test cases for the AI-generated code could meet the requirements for human validation.
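A minimal sketch of what that could look like (TypeScript with Node's built-in test runner; `slugify` stands in for a hypothetical AI-generated function, and the hand-written tests encode the human's actual intent, including edge cases a model might miss):

    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { slugify } from "./slugify"; // hypothetical AI-generated module

    test("lowercases and hyphenates", () => {
      assert.equal(slugify("Hello World"), "hello-world");
    });

    test("collapses repeated separators", () => {
      assert.equal(slugify("a  --  b"), "a-b");
    });

    test("edge cases a model might miss", () => {
      assert.equal(slugify(""), "");    // empty input
      assert.equal(slugify("---"), ""); // separators only
    });

The tests, not the generated implementation, become the artifact a human actually signs off on.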