Being efficient at writing code can barely make someone a 5x engineer. Putting together a team of people who write code effectively is easy.

Hyperproductivity in software is all about deciding what problems to tackle. Richard Hipp isn't worried about Chipotle restaurant orders in Golang; he's worried about how to store data reliably. That isn't a coding puzzle that ChatGTP is likely to help with. Either ChatGTP can do the whole thing itself (not yet the case) or it is a minor productivity boost because the hard part is articulating the problem.

Writing code quickly really isn't a challenge that high performing software engineers need to tackle. ChatGTP is a cool tool, we're all going to be using things like it in a few years, it'll change everything. But it won't make any old engineer a rival to the big names in software engineering.




I briefly worked with a software engineer who was making about a million dollars a year.

I mentioned that an approach would be a lot of work. His response:

“It’s just typing. Typey typey type (makes fingers on keyboard gesture).” Absolute lightbulb moment for me.

To become a senior engineer is to realize that the code writing part is inconsequential as long as there are no technical unknowns in the work.

It’s freed me to try out huge sweeping changes just for fun and generally detach from my code. Sometimes I delete large, intricate changes so quickly and ruthlessly that my colleagues have to ask me to reopen them.

We aren’t paid to write working code. We are paid to grow harmonious systems. Working code is table stakes.


Yeah, I'm really wondering if I'm dumb or something with this whole AI train, but my job as a developer that maintains existing (sometimes quite old) systems is much more about determining what actually needs to be done, gathering lost / not lost knowledge about the existing state of things in systems and figuring out where to change the code for maximum impact and minimal side effects. Lots of talking to customers/non-technical people as well to figure out if existing features can cover something they need and they just don't know about it and I happen to know about it etc.

I've been watching the AI hype (again) and it's completely going over my head, I really really don't get it - I doubt {x}GPT would be able to analyze hundreds of thousands of lines of code across a big system (multiple languages, multiple services, interconnectedness of everything) and tell me what to do, even in the case that I would want to paste everything proprietary into a 3rd party service, which I really don't.

Maybe it just works for a subset of programming, but then again I don't really see how it differs to reading docs for generating boilerplate or whatever it's useful for in greenfield projects. I was also not really impressed by it generating improvements to code when you ask it the right questions since that's the hard part of the job, not typing out the code.

Maybe I'm just a dinosaur and dumb.


Don't downplay yourself like that. A 25k token GPT model is a step in that direction, but we're 3 - 15 years from that vision being realized.

Currently, the language models (GPT or GitHub Copilot) are mostly good as ... copilots. You bounce ideas off them or use them to kickstart something you want to do. It's great for junior devs (me!), but they still cannot do big context or cognition (what you describe); that part will be done by humans for quite a while.


I often suffer from impostor syndrome; my freelance rates should be north of 70 per hour but I keep getting stuck at 40...

I grew up poor though, and teaching myself coding let me earn more than I thought I'd ever earn without a college degree.

I knew about for loops almost a decade before I decided web dev would be my career, not just my hobby.

it's insane that you could get hired and not know how they work.

linked lists, sure. I still don't fully understand those lol, I was going for an associate's and that was in my c++ class but I've mostly only touched PHP, python, ruby, JavaScript... playing around with go and rust some as well, but I've never had to really think about linked lists. loops on the other hand are a daily thing.

hell, not knowing loops is like not knowing how to increment, like seriously, how to add 1 to an int.

Smh....


Put your resume up on job sites!

Even not-so-great-sounding contracts pay 50-65/hr, might be a confidence boost. Level up your skills on udemy, if necessary (wait for courses to be 80% off or whatever, happens regularly).

Could also go corporate at a non-tech for low stress and a decent salary.


I'm actually thinking of pivoting from web dev (10 years a Laravel dev) to prompt engineering and entry-level AI stuff. I'm not going to perfect the next big algorithm, but I can use LangChain to create custom KB-enhanced ChatGPT instances, or automation agents for small-to-mid size companies.

I'm super obsessed lately w/ AI, on youtube, github, HN, reddit, etc... and working on actually building a clone of myself to do the mundane parts I don't like to do... hehe... this is kind of a green field, and the opportunities are huge, might be the last dev jobs when AGI hits -- training custom AGI's...


Hit me up, I’ll be your first client for prompt eng!


I have to ask: how do you work as an eng who pulls in $1m/yr?


(throwaway I guess)

I make over $1m as a SWE. “Senior Staff” or “Level 7” role at a big tech company everyone knows.

I mostly write code! I am way faster than most others here, and I have encouraged and mentored the people on my team to be similarly efficient.

I also help decide technical direction but it’s far from the “architecture astronaut” stereotype that writes no code.

It’s an infra team so no significant influence on product direction, other than considering our frameworks and tools products. I do consider them in that way, but it’s still a big difference from being involved in the direction of the external product.


On a scale of 1 to 10, how much did good timing with regard to equity play into your overall comp?


Probably -5 actually, I got promoted to this level right around when tech stocks started to dip. However, I’ve since had some retention bonus grants that will make it even more absurd if the stocks do go back up.

Timing gets a 10/10 for my $750k year at staff level though.


Congrats on your career success! I'm surprised infra can pay that well; I suppose at some companies they understand infra basically ensures the ship keeps running as it should.


At large companies the payscale for different types of engineers is usually the same or very close.


> mentored the people on my team to be similarly efficient

What were the main changes you introduced, or points you mentored them on, to get them to be similarly efficient?


A lot of it is pretty related to company internals and is tooling specific. At a large company there is a lot of infra and tooling that can break in all kinds of fun ways, quality is very mixed, so helping the team become experts in this is very useful for productivity and unblocking oneself.

I see people on other teams that seem to be stuck for days sometimes, on a really basic error that they haven’t really tried to debug at all. Seriously, just read the error message or dig into the code!

My team also has a strong culture of just pushing code changes. This includes tons of deletions and simplifications; it's not just writing a million lines of code.

It’s also a very senior team, so I can’t take credit for everyone’s growth from junior engineers. Another part of it is creating an environment that productive senior coders want to join, so that they can avoid process-heavy teams and get stuff done.

There is an extreme level of trust on the team because everyone is senior and frankly extremely good at their job, which allows us to be even more productive.


Is that TC or salary?


TC. It’s like 70-80% stock. Depends on price of course. I sell immediately and put it in index funds.


And do you work a lot?


High level IC at big tech can get you around that neighborhood


Emphasis on "high level".

An org of a few hundred people at a big tech company may only have a couple of Individual Contributors making this much money.


What does IC mean? Doesn't seem to be very googleable


An individual contributor, as opposed to a manager.


IC means integrated circuit.



IC=Individual contributor (to answer sibling)


At semiconductor companies we have plenty of ICs doing IC design. The initialism has multiple definitions.


For the guy that asked, IC means integrated circuit. No fucking way they would give that money to a meatsack.


In this context it means "individual contributor", and yes, some people are making that much money at Facebook, Google, Amazon and others.


IC being internal consultant?


There's a couple where I work. They are highly specialized in a very specific area of CS, with decades of experience, and with a vast working knowledge of the specific internal framework that they've helped build over the course of ~15-20 years.


An L7 IC at Facebook or Google can do this.

I know it's unbelievable for those not in one of these companies, but this data is accurate: https://www.levels.fyi


> L7 IC at Facebook or Google can do this.

but they are extremely rare


Not that extreme, maybe 1% of engineers?


Multiple contracts or multiple full time positions (i.e. overemployed)


>> We aren’t paid to write working code. We are paid to grow harmonious systems. Working code is table stakes.

With all due respect, you may have learned the wrong lesson — or missed some important context — and the above is a false dichotomy. Working code isn't just table stakes; it is virtually impossible to "grow harmonious systems" if you don't have solid working code, because without solid working code you cannot grow the system reliably.

The other thing to note is that just because someone makes a lot of money doesn't necessarily mean that they are good at their job, or that they are a good team player, or that the way they operate and the context they operate in applies to you. I know consultants who pull in a million or more every year, and you know how they do it? They write absolute shit software that, while fulfilling the contract requirements, ends up hamstringing the client down the line. Sometimes prioritizing short-term gains might be acceptable or even necessary (e.g. the auditor asks you to compile data from a bunch of sources and you need to do it fast), but for projects with longer life spans one should definitely not blindly "typey typey type".


Table stakes is a poker term meaning you can’t even play the game without it. There’s no dichotomy here.

I’d go further and say that in big tech, well factored well tested code is table stakes and the ability to produce it quickly is assumed. Junior engineers have to prove that to get their first promotion.

If that’s all they show, it will be their last promotion, and that’s perfectly fine too.


I think this is very accurate to my experience in big tech. As a person who's only gotten that "first promotion" so far, I'd be interested to hear your summaries of what attributes / skills are looked for at successive levels: e.g. what engineers need to prove to get promoted to Senior, Staff, Senior Staff, etc.


Seplly spell spell too.


Oh yeah I know somebody who makes 2 million and disagrees. And he's twice as correct as your guy.


I'm aware that I'm in the minority here, but I find that investing heavily in being able to type fast and handle the tools well actually changes what code I write and not just how quickly.

The faster I've gotten at moving code around, the more subconsciously willing I've been to try something with a modest probability of working out, and more importantly the easier it's gotten to throw away a draft that isn't working out and try again.

LLMs haven't made it into my coding workflow yet, but I can see how they could be useful and the trend seems to be that they'll be in most high-octane workflows at some point.


I think this is true, but I also found diminishing returns on going faster; at some point you're just confident enough to try out whether it is a good idea. It also becomes easier to justify: you become more trusted and you evaluate better.

Maybe that makes everyone hit that point easier, maybe it brings that point in for you. Not sure.


I used to attend cross-team meetings with a guy (works for AWS now) who was a tech lead and meeting facilitator. He gathered the agenda, presented during the meeting, assembled notes during the meeting, and distributed the notes via email at meeting conclusion. He keyboarded the entire time, looking up at the projected display. Fastest touch typist I recall seeing.


Yes, but being the fastest foot-shooter in the room doesn't cut it either.

Smartness isn't wisdom.

AIs are, at most, producers of synthetic smartness.


a good programmer is almost certainly a fast typer

of course a fast typer doesn't necessarily make a good programmer


Yes and no. Most of my stuff is automated: auto completion and micro code gen. My thinking is the bottleneck, never typing speed. Measured it last summer a few times trying to beat my daughter and I think it was about 60wpm. For coding, if I know what I'm doing, it's probably 30wpm in my editor.


Yeah, these days, IDEs like JetBrains are powerful enough that you can pull off significant chunks of refactors with a few right clicks (any sort of renaming, moving of methods, or deletion of code)

Typing will get you places faster, but some of the most productive tools we have don’t rely on it at all.


do you write documentation, though?


Define “efficient at writing code”. I find that for many people this means something primitive and absurd like typing faster or remembering things better.

Being efficient in the perspective of writing new automation so you don’t have to do the same manual things again can easily make someone a 10x (or greater) engineer. This typically means writing your own tools and not hoping some third party package or framework will do it for you.

Conversely the same is true. If a 10x engineer is in an environment where they are prohibited from writing original tools they will just be average or much worse when their motivation is destroyed.


I agree with the compounding impact of self-automation.

Also, some people just keep growing their creative outlook. They are not afraid to try solving bigger & bigger problems.

I worked with an engineer (individual contributor) at a company with a lot of related software products.

Time and again, he would hibernate then come out with a new well developed tool or product that either impacted the entire product line or started a new product line.

Once the value of what he had done was obvious, and his clever design had stabilized in a form understandable to others, he would hand it off to a team of great developers to polish, release, update etc.

10x impact would be an understatement.

If we measure impact, it is easier to see where 1x, 5x, 10x, or 100x can be achievable and verifiable.

Also x/2, x/10 or (and I have seen it) -x. One team leader, -5x at least, until they were removed.


Yeah, a traditional 10x engineer would know how to create and use leverage to get much more work done faster, in this case by creating a great tool. Reminds me of a car mechanic who showed me this weird tool and said it was one of the most useful tools he'd fashioned.


We should not forget that different people have different talents. That creative superstar guy may have been great at coming up with new stuff but terrible at maintaining stuff. Or at debugging some urgent prod issue under time stress on a saturday evening. You need those people too that enjoy the day-to-day maintenance to keep the house clean, or the occasional firefighting. A good team contains all those types and they complement each other.


In this case, they were great maintaining stuff too. They remained available to dip into that work whenever it helped.

But as a rule, it would not have been challenging enough to keep them engaged for long. And it would have been a waste of their talents for themselves and the organization.

It wouldn't just have wasted their time directly, but wasted the opportunity for lesser (but still A-grade) developers to dive into a new area and maximize its value. There was still lots of creative work to be done. These were not trivial new areas.


> they will just be average or much worse when their motivation is destroyed

Ouch


That's true, and I'd lean on the "much worse" side. When you join a project that has very minimal tooling and almost no automation, you're facing a dilemma: should I focus on improving the tooling while delaying features or should I deliver features while making future improvements ever harder to make?

The problem with this is that neither is good for your mental health. Unless you're joining that project with a specific goal of improving its development process, don't bother. Run away. Fast.

Especially if there are already other devs that are militantly against your take on dev process improvements. The level of emotional distress is hard to fully appreciate without experiencing a situation like this. Suggesting the simplest and most obvious things becomes a brutal fight that you're guaranteed to lose. Living without them makes you miserable in its own right. You're trapped between a rock and a hard place with no way out.

If out of various kinds of psychotherapy you pick the one based on an experience of living in a Nazi concentration camp as the most applicable - it's time to run. Really. Don't look back.


The other thing is making better choices about what to make, how to make it, and most importantly what not to do. That alone can often save 9x on almost anything.


Yeah, being able to negotiate with the PM to still solve the problem in an easier way than initially proposed is probably where I've had the most impact over the years.


I discussed my experience using ChatGPT on HN before and I was accused of being skeptical. But it isn't true at all; I think LLMs are a wonderful tool that will change software development. However, I'm not worried about losing my job.

As you say, I find the act of writing code is hardly the bottleneck to my productivity. In one of the best case scenarios, LLMs become the ultimate high level language. In that case, Software Engineer roles would switch from coding to more of a mix of old school Software Architect, PM and entrepreneur.

By the way, I find it interesting to run the argument of "coding is not the bottleneck" in reverse. What if you design an organization so that coding is the bottleneck? How transformative would LLMs be?


> As you say, I find the act of writing code is hardly the bottleneck to my productivity.

It's always puzzling to me when people claim ChatGPT can replace software engineers because it can sort of code. 20 years into this career, and writing code is only a minor part of my day-to-day activities.

A part I do enjoy, but still a minor part nonetheless.

> What if you design an organization so that coding is the bottleneck? How transformative would LLMs be?

The funny thing is that although ChatGPT has been an amazing coding assistant tool, it can't meaningfully write code beyond boilerplate for me. I use it more as a search engine on steroids, rather than actually relying on the code it writes.

A few times it even got into a hallucination loop going through different flavors of uncompilable code when I asked "how to do X using library Y", and refining the question was getting nowhere.

ChatGPT is an amazing tool that provides me value beyond the 20 bucks per month I pay. But still has far too many limitations. It's sort of funny to see people LARPing that it will kill knowledge jobs in the near term.


What amazes me when I talk to people that work in "non-tech" companies is how common boilerplate work is actually out there. Man-millennia of effort to move data from one system to another, or generate a report, or trigger something. The first AI MuleSoft or similar platform is going to kill it.


> boilerplate

Design Patterns are said to compensate for features missing in a language (e.g. Visitor for multiple dispatch). ChatGPT is sounding similar - solving accidental complexity that "shouldn't" be there in the first place.


Yes. I find it valuable for writing boring translation code.

Stuff like

   match network_foo_int {
      TYPE_A => Foo::A(read_a()),
      TYPE_B => Foo::B(read_b()),
      ...
I've also found it reasonably good at writing the inverse of the preceding function.


No doubt chatGPT can't replace coders yet. If the current pace of improvement continues (a big if!), it definitely will be able to in a couple of years. Honestly I expect it to happen.

Though I don't expect software dev job to go away. It'll just change in nature. Less typey typey, more managing the LLM.


To me it looks like the rate of change has already slowed, having played around with GPT-2, used Copilot a fair amount, then ChatGPT, and now ChatGPT with GPT-4.

There's also not orders of magnitude more existing data we can feed it anymore.


> Honestly I expect it to happen.

Not as a language model. Other leaps in technology would need to happen before it can code.


I mean saying that using ChatGPT will make you one of the most elite coders in the world is a pretty high bar no?

I have been skeptical about all this, but decided to play with using ChatGPT for coding a task I had. It wasn't perfect, either in workflow or in final results but what was clear was that not adopting it will very quickly make you lag behind your peers.

I think the additive bonus is especially high for senior devs who are fast, in that those will be faster in grokking the generated code and better at guiding it to the right solution.

I guarantee you that in five years anybody who cares at all about their productivity will be using it constantly in their workflow, probably much sooner.


The tooling will be pretty interesting.. basically prompts per file and an overlaying diff per prompt.. Focus mostly on the almost functional design of software, with well defined input and output as the most important criteria. Also lots of "meta" tags that filter the trained-in network layers for "fast" code, "secure" code, or "minimal working sample" code by injecting additional prompts. The trick is to use the prompts as a selection menu for the data variations stored within the network.


I imagine that a web service could be specified in one file. At the top, a description of the service, including the language it is to be written in and how it will be built and deployed. Then an outline of the endpoints and their functionality.

A builder bot could then ask a series of questions and get permissions such as “can we use X package?”, and then update the specification before performing the build.


ChatGPT is an amazing learning tool; it has helped me understand scripting very quickly and expanded into things I never would have thought to do.

I know I could have used Google, but the way Google presents things is not easy to find and digest: you have ads, you have articles that are not exactly what you are looking for, and the code examples are not exact. GPT eliminates all that time and effort. It almost eliminates laziness.


You need to use it now because it works; it's not perfect, but it has reached the 80/20 for enough things, and you still need to do the last 20. Even if not, it will get you some gains if you use it where it fits.


Can’t believe you were downvoted. Nuance and its appreciation are becoming rare.


I agree with your sentiment. I've generally advanced in my career by being good at deciding what problems to solve/not solve.

The problem is many, many people are not in a position to advance that way. I've worked in startups where you're almost always given an ambiguous problem and rewarded for meaningful outcomes.

Many people are working as part of team/projects where the general problem and approach are already determined for them.


Oddly, I am most productive when I have time to be slow. I make 10x better decisions about what to do, how to do it, and in many cases why NOT to do it.

This isn’t very conducive to corporate work though. It’s about timelines and features promised to customers.

I find I’m 10x more productive in the long term on projects I’m the sole developer on, or when working with a small, close-knit team. And I don’t measure that in lines of code or number of features.

I wonder if ChatGPT can help there.


> This isn’t very conducive to corporate work though. It’s about timelines and features promised to customers. [...] I wonder if ChatGPT can help there.

Obviously not, it will just make expected windows shorter and shorter.

"This would take a C developer a month, but you use Java so 2 weeks should suffice"

"This would take a Java developer two weeks, but you use Python so a week should suffice"

"This would take a Python developer a week, but you use ChatGPT so I'll expect it sometime tomorrow."


> Either ChatGTP can do the whole thing itself (not yet the case) or it is a minor productivity boost because the hard part is articulating the problem.

What a ridiculous dichotomy, of course there's plenty of room in between. The hard part might be articulating the problem, but the other part takes time. Make that part not take time and you can do more of the hard part.


I think code-generation is a red herring. I'm more interested in things like:

- Explanation/research ("how does this work?")

- Code analysis ("tell me if you think you see any bugs, refactoring suggestions, etc in this sprawling legacy codebase")

Things that feed into the developer's thought process instead of crudely trying to execute on what it wants


Unfortunately, LLMs are still prone to making facts up, and very persuasively. In fact, most non-trivial topics I tried have required double-/triple-checking, so it's sometimes not really productive to use ChatGPT.

You are correct that I made an error in my previous response.

I apologize for the confusion I may have caused in my previous response

I appreciate you bringing this to my attention.

I apologize, thank you for your attention to detail!


I asked it to explain how to use a certain Vue feature the other day which wasn't working as I hoped. It explained incorrectly, and when I drilled down, it started using React syntax disguised with Vue keywords. I definitely could have tried harder to get it to figure out what was going on, but it kept repeating its mistakes even when I pointed them out explicitly.


I've only been using it for a few days and already find this post amusing. Almost hit a "come on, man, just get it" moment.

Incredible technology, though. Feels like whoever decides not to use it will be at a huge disadvantage real soon.


Which is exactly why this use-case would be better. "Give me suggestions/ideas, and then I'll take them under consideration"


Part of why I don't use ChatGPT very much for work is that I don't want to feed significant amounts of proprietary code into it. Could be the one thing that actually gets me in trouble at work, seems risky regardless. How is it you're comfortable with doing so? (Not asking in a judgmental way, just curious. I would like to have a LLM assistant that understood my whole codebase, because I'm stumped on a bug today.)


I'm not doing it right now, I'm more imagining a near-term product designed for this (maybe even with the option to self-host). Current LLMs probably couldn't hold enough context to analyze a whole codebase anyway, just one file at a time (which could still be useful, but)


Even further, the big names in software engineering will likely figure out how to use these models far more effectively than any old engineer.

Fabrice Bellard is utilizing transformer models for lossless compression, for example: https://bellard.org/nncp/


Writing code quickly may not help the mythical 10x engineer, but it might help a 1x engineer become a 5x engineer.

In relative terms, the unaided 10x engineer is now only a 2x engineer as his peers are now vastly more productive.

That's huge.


The GP is undermining their own point by even mentioning “5x”, opening the door to think about this in linear terms. But the concept of a “10x engineer” was never about how fast they produce code, and the multiplier was never a fixed number. The point was that some engineers can choose the right problems and solve them in a way which the majority would never achieve even with unlimited time.

As an example, if you took away the top half of the engineers on my current team and gave the rest state of the art Copilot (or equivalent best in breed) from 2030, you would not end up with equivalent productivity. What would happen is they would paint themselves into a corner of technical debt faster until they couldn’t write any more code (assisted or unassisted) without creating more bugs than they were solving with each change.

That doesn’t mean the improved tooling isn’t important, but it’s more akin to being a touch typist, or using a modern debugger. It improves your workflow but it doesn’t make you a better software engineer.


Maybe in 2030 the AI will be able to respond to that situation appropriately, like it won't just layer more sticky plasters on top with each additional requirement and make a mess but will re-evaluate the entire history of instructions and rearchitect/refactor the code completely if necessary?

And all this with documentation explaining what it did, and optimising the code for human readability, so that even with huge reworks you can still get the gist in a time measured in hours rather than it taking, what, weeks to do manually?


I mean, looking at the math that powers these models I don't see how they can replace reasoning. The tokens mean absolutely nothing to the algorithm. It doesn't know how algebra works and if you prompt ChatGPT to propose a new theorem based on some axioms it will produce something that sounds like a theorem...

... but believing it is a theorem would be similar to believing that horoscopes can predict the future as well.

Maybe some day we'll have a model that can be trained to reason as humans do and can do mathematics on its own... we've been talking about that possibility for decades in the automated theorem proving space. However it seems that this is a tough nut to crack.

Training LLMs already takes quite a lot of compute resources and energy. Maybe we will have to wait until we have fusion energy and can afford to cool entire data centers dedicated to training these reasoning models as new theorems are postulated and proofs added.

... or we could simply do it ourselves. The energy inputs for humans compared to output is pretty good and affordable.

However having an LLM that also has facilities to interact with an automated theorem proving system would be a handy tool indeed. There are plenty of times in formal proofs where we want to elide the proof of a theorem we want to use because it's obvious and proving it would be tedious and not make the proof you're writing any more elegant; a future reasoning model that could understand the proof goals and use tactics to solve would be a nice tool indeed.

However I think we're still a long way from that. No reason to get hyped about it.


Maybe the prompt engineer will learn from the conversation and remove their dead-end questions. Every time you ask a question, GPT is just re-running it all including the previous context. If you start over with a trimmed conversation, it's like the player was never there.
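
To make that concrete, a rough Go sketch of re-sending a trimmed conversation to a chat-style API. The endpoint and JSON shape follow OpenAI's chat completions format, but the model name, messages, and the ask helper here are just illustrative assumptions, not anyone's real tooling:

    package main

    import (
        "bytes"
        "encoding/json"
        "net/http"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    // ask re-sends the entire (possibly trimmed) history on every call;
    // the model only "remembers" whatever is in this slice.
    func ask(apiKey string, history []message) (*http.Response, error) {
        payload, err := json.Marshal(map[string]interface{}{
            "model":    "gpt-3.5-turbo",
            "messages": history,
        })
        if err != nil {
            return nil, err
        }
        req, err := http.NewRequest("POST",
            "https://api.openai.com/v1/chat/completions", bytes.NewReader(payload))
        if err != nil {
            return nil, err
        }
        req.Header.Set("Authorization", "Bearer "+apiKey)
        req.Header.Set("Content-Type", "application/json")
        return http.DefaultClient.Do(req)
    }

    func main() {
        history := []message{
            {Role: "user", Content: "Propose an approach for X."},
            {Role: "assistant", Content: "(a dead-end suggestion)"},
            {Role: "user", Content: "That didn't work, try Y."},
        }
        // Drop the dead end before asking again: the trimmed slice is all
        // the model will ever see, as if that turn never happened.
        trimmed := append([]message{history[0]}, history[2])
        _, _ = ask("YOUR_API_KEY", trimmed)
    }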


... if churning out boilerplate is your definition of "productive".


Churning out boilerplate is a large part of the job. The quality of that boilerplate is part of what makes great engineers great engineers.

Knowing when and how to use dependency injection, making code testable, what to instrument with logs and stats, how and when to build extensible interfaces, avoiding common pitfalls like race conditions, bad time handling, when to add support for retries, speculative execution, etc. are part and parcel of what we do.

If ChatGPT can help raise the quality of work while also increasing its quantity, that'll be a huge leap in productivity.

It's not all there yet, but I've been using it to write some simple programs that I can hand out to ops / business people to help automate or validate some of their tasks to ensure the smooth rollout of a feature I've developed.

The resulting ChatGPT code--I chose Go because I can hand over a big fat self contained binary--avoids certain subtle pitfalls.

For example, the Go HTTP client requires that you call Close() on a response body, but only if there's no error.

The code it spat out does indeed do that correctly. And it's well documented.
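
For anyone who hasn't hit that pitfall: a minimal sketch of the idiom, not the generated program itself (the URL and function name are placeholders):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetch sketches the rule above: when http.Get returns a non-nil error,
    // the response is nil, so there is no body to close; Close() must only
    // run on the success path. Deferring it after the error check does that.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err // nothing to close here
        }
        defer resp.Body.Close() // safe: err == nil, resp is non-nil
        return io.ReadAll(resp.Body)
    }

    func main() {
        body, err := fetch("https://example.com")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        fmt.Println("got", len(body), "bytes")
    }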

It's far from perfect; I've seen a subtle mistake or two in my playing around with it.

I'm now no longer so much an author but an editor for basic stuff. I now have my own dedicated engineer with a quality level that oscillates wildly between fresh intern and grizzled veteran.

It'll only get better over time.


If churning out boilerplate is what your employer sees as the product, does it really matter? Coding ideals are valuable in a vacuum, but most companies want those PRs merged, technical debt be damned.


If this is the case they need to up their abstraction game.


Isn't most code boilerplate belts out the choir.


what?


"Isn't most code boilerplate?" belts out the choir/peanut gallery/greek chorus


> Putting together a team of people who write code effectively is easy.

Based on my experience, you’d be surprised how hard it is sometimes. I have a team of 10 engineers that failed to ship a basic CRUD UI after several months (and it wasn’t even close to done). Eventually it got taken over by one individual eng who finished it in two weeks.


That sounds like it was because there were 10 engineers assigned to a way too small task, not because teams are ineffective.


I've seen this happen several times before, and it's never been because people weren't typing fast enough: it's always bad or nonexistent management, unclear accountabilities, or lack of technical direction or oversight (one failed CRUD project was written purely functionally by an engineer in Scala + Scalaz, absolute insanity). None of these are solvable by LLMs.


I've seen this even at top software companies.


Can't agree more.

The problem isn't generating code. It's articulating a functional specification with enough precision and clarity that you can solve for the unknown, which is the program that satisfies the specification.

I agree with the conclusion that tools like ChatGPT won't be replacing programmers any time soon. And it won't be helping us to write programs until it can properly reason about the information it is trained on and prompted with. The problem with natural language is that it is often too imprecise. When you get to concurrency or optimizing data layout an LLM isn't going to understand anything about the problem space and the training data probably won't contain enough examples to produce something remotely plausible let alone good or elegant.

Writing, thinking, and asking the right questions will still be important for quite some time. We're not in the age of having a magical genie in our computers that can do all the work for us... yet.

Although using it as a procedural generation tool for fake data is nice... still not as nice as writing a property test with generators and shrink... but as a little time saver for quick prototyping, neat.


Uh no, ChatGPT is good at being a better (contextual) search engine, a rubber duck that explains code to you (which is still bad but keeps improving), and a mid-level code generator junkie that can generate CRUD operations, unit tests, or transpile code between languages.

All of it will reduce mental load for senior devs and enable them to tackle more problems or more difficult problems.


> a mid-level code generator junkie that can generate CRUD operations, unit tests, or transpile code between languages.

Not really though, at least not yet. I've seen some impressive results for trivial cases: the type of thing you'd see blogged on new web devs' blogs 10x over, but I've yet to see anything non-trivial that worked come out of GPT.


It's ChatGPT, not ChatGTP. GPT stands for generative pre-trained transformer. [1]

[1] https://en.wikipedia.org/wiki/Generative_pre-trained_transfo...


I think reaching peak optimization as an engineer is a mix of:

- Knowing or having control of the tools you use

- Knowing what problems are most important to revenue and which are the greatest risk

- Knowing the language well enough to plan ahead without much labor

When a company I worked at mandated an IDE it took me a while to be productive beyond typing speed. When I discovered my company had support for generators, I was no longer spending forever on boilerplate REST code. When I know my test harnesses well it's easier for me to prove functionality in a smaller feedback loop.


> But it won't make any old engineer a rival to the big names in software engineering.

This stood out, so I looked it up, and it seems Richard Hipp is 61 years old. I guess it is more about the "any" than the "old". Or maybe old engineers are engineers doing it the "old way" (pre-chatgpt)?


"any old x" is an idiom that references "any random" or "any", indicating x being interchangeable or not special in any way. It has nothing to do with age.


>>Either ChatGTP can do the whole thing itself (not yet the case) or it is a minor productivity boost because the hard part is articulating the problem.

Telling any AI system what to do will be the biggest problem. A lot of toy like algorithm problems can be easily stated. Stuff like inverting binary trees, sorting, and your myriad different Leetcode stuff is the easiest thing to do here.

Real world software has if/else statements peppered all over large code bases and domain logic. Lots of IO exceptions, lots of domain-specific logic; all of this is nearly impossible to dictate in words. What's more, you will eventually need some concrete way of describing things, which in all honesty will look a lot like code itself if you have to get it right.

Modifying code will likely be even harder to describe in speech than in text.


Came here to say this.

Over my career, the higher up the pay grade I went, the less actual code I was writing and the more it was figuring out what to do, what's priority, how the existing code all fits together, evaluating risk, and how to make the right surgical incisions to accomplish the goals.

Glorious and quite rare is the moment when a problem is clearly written up by a PM and the codebase is greenfield or clear enough that just pumping out code is happening and taking up my day.

Most senior software engineers are less programmer and more analyst.


I think the headline was being tongue in cheek. It probably generated more clicks than “5 clever examples of using ChatGPT via CLI for programming and data science”.

Still a good post and reply.


Yup: Privates talk skills, Sergeants talk tactics, captains talk strategy, but generals talk logistics.

Slinging code is skills & tactics at best. It can only have a small multiplier.


The most direct example of this to me is in video games: there are tens of thousands of failed games with amazing art and coding behind them, but then weekend projects like Flappy Bird, Vampire Survivors and Wordle make it big. They just were lucky enough to find the right angle; a team of 100 PhDs working 10x as long would find less success.


Maybe I'm misunderstanding your last sentence - but if churning out boilerplate is something any old engineer can now have an AI do, won't it up the chances that someone previously overlooked makes a brilliant call about what problem to tackle and actually has the resources to execute on that call?


I'm doubtful. Being effective as dev usually involves a conscious effort to focus on the context of where we are, where we're going, and what's needed to get there. Also, the ability to recognize and avoid rabbit holes.


Sure, but isn't zooming out to that 'big picture' perspective a very different skillset from the focus needed to write functional code?


Good points. I feel obligated to tell you that it is spelled, ChatGPT not ChatGTP.


Apologies, it is very common for an LMM to make this kind of mitsakes.


For clarity, for anyone wondering what LMM stands for: it's Large Manguage Model. Apologies, couldn't help myself either.


Indeed it's not about what you type, it's about what you don't.


I'll add that programmers don't produce code. They produce business guarantees. And their main tools to do so are:

* Understanding the value proposition as a human, often as through a conversation; often with hidden criteria that you'll have to uncover

* Making sure the code respects the value proposition. As reading other people's code takes more time than writing it, ChatGPT won't help that much here.

----

One development to this story I'm anticipating, though, is connecting ChatGPT to formal proof systems, and making sure it has sufficient coverage with 'internal thoughts' that are logic-verified (as opposed to killing the problem big-data-style). Sort of like the internal monologue that we have sometimes.

I say short and medium term we're sort of ok; but I don't know about 10 years from now.


Chain of thought reasoning is being explored with some success. It’s better for it to be explicit output instead of internal, remember that the next token is being predicted based on the previous ones.

Your two points may not be as sound as you think. The second one can be defeated now since ChatGPT is quite good at generating descriptions of what code is doing. The first point is more subtle, you can iterate on code with ChatGPT but right now you need to know how to run the code to get the results before providing feedback. Once tools are developed for ChatGPT to do that itself then it could just show the results and a non technical person could ask for follow up changes.


Clearly most programmers do produce code, although I’ve worked with a few who seem to do so only once all other options have been exhausted.


Iteration speed helps.

That said, the real work is insight away from the keyboard.

I think your "what problems to tackle" is overlapping management or business strategy - though yes, it absolutely is the greatest multiplier.


> Being efficient at writing code can barely make someone a 5x engineer.

In a winner-take-all economy, usually the winner is the person with the ability to do something the slowest.

Not always, but usually.


Say more


As the winner they are going to have the luxury of taking a few weeks to get back to you on this one


Usually the person with the ability to create the best products and services is the one with the deepest reservoir of background knowledge and aesthetic taste that they can apply to a given problem.

So, for example, someone who can productively spend 100 hours working on a blog post is probably going to win out against someone who can only productively spend 5 hours working on a blog post on the same topic.

And given a society where 99% of the rewards go to 1st place, 1% of the rewards go to 2nd place, and 0% of the rewards go to everyone else, having this ability is usually more valuable than the ability to pump out a little more work in some fixed amount of time.


Unfortunately the person who only spends 1 hour working on a blog post, but can spend 100 hours cranking out 100 blog posts, wins out. See Simon Willison, for example.


You're not wrong, but it's ultimately the same background and skillset as what I'm talking about that enables that also. E.g. Simon Willison could easily sit down and write a bestselling book if he wanted to.


But if the support tool gives you the chance to iterate faster you will find the solution faster. And the bulk of every solution takes time to code.


Iteration time is important, but part of that is knowing and learning what to iterate. Therefore there will still be a human in the loop, and by Amdahl's law, speeding up a part that is only 30% of the work will never produce more than a 30% overall improvement.
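
To put rough numbers on that (purely illustrative), a tiny sketch of Amdahl's law:

    package main

    import "fmt"

    // Amdahl's law: speeding up a fraction p of the work by a factor s
    // gives an overall speedup of 1 / ((1-p) + p/s).
    func amdahl(p, s float64) float64 {
        return 1 / ((1 - p) + p/s)
    }

    func main() {
        p := 0.30 // the LLM-assisted share of the job
        for _, s := range []float64{2, 10, 1e9} {
            fmt.Printf("part sped up %gx -> overall %.2fx\n", s, amdahl(p, s))
        }
        // Even as s grows without bound, the overall speedup tops out at
        // 1/0.7 ≈ 1.43x: you never save more than 30% of the total time.
    }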


It's definitely useful, but IMO it's not 5x or 10x and of course not x1000 force multiplier. More like a very good and useful tool.


is it me or did he just misspell chatGPT 3 times and nobody noticed?


chatGPT doesn't like that, and will surely take revenge eventually


I mostly read that as a wacky joke though.



