Very early on I took pains to figure out which toolchains gave me traction and which produced wasted complexity.
I think a lot of the excitement about using LLMs to code comes from the fact that many teams are stuck in local optima where they have to use noisy tools, while plenty of de-noised output is available to train LLMs on.
This is progress in searching for and mitigating bad trade-offs, not in advancing the state of the art.