Things I’ve learned in 20 years of programming (daedtech.com)
531 points by galfarragem on Nov 22, 2019 | 223 comments



Some things I've learned after 35 years;

  1. learn how to communicate: being a good developer requires as much (more?) social skill as it does technical skill. I would recommend formal training and practice.
  2. question all assumptions.
  3. there is no silver bullet (read mythical man month).
  4. fight complexity and over-engineering : get to your next MVP release.
  5. always have a releasable build (stop talking and start coding).
  6. code has little value in and of itself; only rarely is a block of code reusable in other contexts.
  7. requirements are a type of code; they should be written very precisely.
  8. estimation is extrapolation (fortune telling) with unknown variables of an unknown function.
  9. release as frequently as possible, to actual users, so you can discover the actual requirements.
  10. coding is not a social activity.


Spot on - 40 years of coding, 33 professionally, and your list resonates with me. I'd also add: write code in tiny pieces where you don't do the next piece until you're confident the current piece works. I get fidgety when I have a bunch of code I don't know works.

Mostly, though, I can tell a grizzled veteran (and I've known some that hardly had any years under their belt, it's all in the soul!) because they rarely claim to know The Answer, and have a weary self-confidence that, even though they don't know how, they will get the job done.

Last year, after experiencing several individuals that fit the term, I took to calling some people "Medium Developers" - generally falling in the 4-8 year experience range, they get much of their knowledge from Medium articles and believe the dogma wholeheartedly. Complete lack of nuance in all things, and very evangelical about their dogma. Shitty managers are enthralled with them, and the code written (slowly) in such situations is horrific. "Throw it out and do it the Right Way (tm)" is the mantra.

Honestly, the blog post in the OP sounds like a Medium Programmer. Their worship of TDD is one symptom!

"I don't know, but we'll figure it out" is what I like to hear, versus, "This is how it Must Be Done!"


> write code in tiny pieces where you don't do the next piece until you're confident the current piece works. I get fidgety when I have a bunch of code I don't know works.

I think getting to this point really helps with the "flow" issue ("coding is not a social activity"). The more you can isolate small steps, the more easily you can weather interruptions.

> Their worship of TDD is one symptom!

My suspicion is that this person is very good at writing the tests, at the right level of isolation and detail, and that watching them work would feel less dogmatic than the text of the blog post suggests.


> "I don't know, but we'll figure it out" is what I like to hear, versus, "This is how it Must Be Done!"

I've had this argument multiple times recently. Usually when I say the former, the response is something like "But how can we really estimate it or plan it before we know exactly what we're doing?"

Writing overly detailed requirements without coding gives me the same feeling of fidgetiness you mention. My attitude is "Plans are worthless, but planning is essential", and you have to cut off design at some point because the risk (that you discover something in coding that invalidates the plan) outpaces the value (of knowing what you're doing).

Reminds me of another problem I've seen recently - an obsession with writing code that is "overly correct". There are times where a small change of a few lines accomplishes a thing but produces some undesirable side effect - it breaks an expectation somewhere, for instance. But the alternative is rewriting a huge portion and adding a ton of code and adding complexity as a result, even if the methods and their responsibilities are more clear.

"Throw it out and do it the Right Way", in other words. We've become very conditioned (especially mid-career) with focusing on never allowing tech debt and doing it the right way. But sometimes you actually add complexity in trying to achieve perfect cleanliness. And in fact, even if the resulting code is slightly less complex, there is a lot of risk in the rewrite.

At my stage of my career - 15 years - the biggest conflict I see is between business that wants estimates and the inherent unpredictability of writing software. Business wants the list of things I'm going to get done, meanwhile I want priorities. Are you able to say "Hey, this thing I thought might take a week is actually a quarter long project, should we set it aside knowing that?" Often the org is simply not flexible enough to handle that (or handle scoping it down so it can get done in a reasonable amount of time).

I'd almost say what you describe is the sign of a mid-experienced manager, too. They've realized it's important to shield their team from some organization problems so they can write quality code (moving past the "yell at them until it gets done" phase). But they haven't really learned how to get past the mid-level engineer obsession with beautiful code, or to handle the general complexity of the inherent clash between business goals and engineering goals - they've gone full towards the engineer side of that spectrum.


This is unreadable on mobile.


I'm also having a hard time swiping the text. So, on behalf of the poster:

1. learn how to communicate: being a good developer requires as much (more?) social skill as it does technical skill. I would recommend formal training and practice.

2. question all assumptions.

3. there is no silver bullet (read mythical man month).

4. fight complexity and over-engineering : get to your next MVP release.

5. always have a releasable build (stop talking and start coding).

6. code has little value in and of itself; only rarely is a block of code reusable in other contexts.

7. requirements are a type of code; they should be written very precisely.

8. estimation is extrapolation (fortune telling) with unknown variables of an unknown function.

9. release as frequently as possible, to actual users, so you can discover the actual requirements.

10. coding is not a social activity.


A bit OT, but I really wish HN supported markdown in comments - with the current system, I often find it difficult to format as I intend, especially while on mobile.


Could you please elaborate on "coding is not a social activity", or offer a reference? Thank you very much in advance!


np. The "Stuff" that surrounds coding is very social (point 1), but the actual activity of writing code is very inward-focused and ideally uninterrupted. The idea of FLOW is a very real thing; it's very zen: hours just fade away until you look up and everyone has gone to lunch. I'm tilting at the open-office windmill and the idea that, whilst I'm actually coding, I need to interact with others.

Being a "Developer" (planning, designing) is very social, however writing the actual code, for me at least, is a very inward activity and suits my quiet personality. The problem here is that non-coders don't understand this and it can cause a lot of friction when working in the usual group office environment.

It's a catch 22, people want me to code for them, but then they have a problem with me socially when I'm trying to code ;)


This is off-topic, but I have never experienced the "flow" you mention but I've had a decent career so far. I'm not the best or anything but I've done well in terms of promotions or projects shipped.

I've never lost track of time coding like you say - I stop at 12:30pm for lunch and then at 5:00pm to go home. I've always wondered if I am doing something wrong or maybe I just like programming enough as a job.


A job is for a wage. You don't mention programming as a hobby. Flow happens when you're personally engaged and truly interested in the solution you're building. This happens more on personal side-projects than at work, where the modern work environment makes sure to interrupt us as much as possible. If programming is just the objective of codifying some business logic for a company, you won't become personally engaged.

Also, non-trivial and complex solutions may break flow, as you have to walk and think more. Flow is more for those passages of coding where you jot down simple but lengthy solutions, then realize a simple refactoring will DRY up the code. Then after 5 such iterations with some YAGNI, you've skipped lunch that day! You became engrossed in the work itself. Music may help, but becomes a distraction for any deeper design thoughts.

Nothing magical about flow, but it feels very productive even though one might be working on the wrong solution all along! In the gaming industry, with its huge codebases, programmers absolutely need to get into flow in order to complete it all in time.


To enter a flow state you need to be working on something moderately challenging, but within your ability. It has to be a task that takes concentration and will take at least an hour to complete. You need to be relaxed and cannot have any distractions around you. It takes a good 15-20 minutes to get into a flow state.

For me flow is very pleasant but I don't get to experience it often, maybe once a month. It always feels very productive. I can be in a light flow state where I am still slightly aware of my surroundings, but on rare occasions I have experienced deep flow where I am aware of nothing but my internal monologue and the code. I daydream about being able to get into that mode sometimes.


Being able to enter that state isn't reliable even for people who do it often. I've never been able to do it at work, it happens somewhat often when I'm doing home projects if my environment is suitably quiet.

I've found that I experience a kind of embarrassed reluctance to fully commit all my attention to something while I'm at work surrounded by people, that prevents it for me.


I get it reasonably often when I'm coding my projects late at night, when there's nobody there to bother me. The downside is that it messes with my sleep, since I'll start at midnight, write some code, think "okay I need to go to bed by 1 because I haven't been sleeping well" and I look and it's 5 am.


Sad to hear. Every job I've had, I've been able to get into that flow. But I've also always had my own office or worked from home. 20+ years of doing this professionally and as a hobby.

My advice is to find some way to take pride in the work you do, and/or change employers when that flow stops happening. Life is too short not to.


Our minds work very differently... if you're good at producing results and enjoy what you're doing keep doing it :)

Some of us do most of our work in stretches of "flow" that involve disconnecting our minds from the outside world and from the people around us. We "go deep", like in a state of trance, for stretches of time. We probably work faster, but we also need more and longer breaks between these stretches of hyper-intense work. Others like you (maybe most?) can work while aware of time and their surroundings. It's good that we're different; each approach has advantages and disadvantages, and we all discover what works best for us individually... just respect the needs of the people around you, and let them work however fits them best!

As a "flow-er", the most important skill I've learned is how to be productive even when I can't get into flow. As a non-flow-er, you should probably learn the opposite: find tasks and ways of working that can help you discover "flow"... even if you won't experience it very often, it's worth experiencing it!


I was going to type a more detailed response, but the people below have done an excellent job already. The body has to be balanced, distractions at a minimum, the mind open and curious and the problem "not too easy and not too hard". I like to think of it as "chasing the white rabbit". One thing that can help is to write a detailed plan of what your coding session is going to achieve, with some stretch goals. If you do it right you will enter a cycle of achieving small goals leading to refactoring and bigger goals and then you.... look up and the whole team has gone to get Tacos without you again.


For me there are two big contributors to flow (and flow may not be the same for everyone): the requirements being very well understood, and a willingness to code them.

If it's boring or boilerplate it doesn't work. Otherwise, some kind of flow normally happens. I believe you require some amount of concentration hence the need to not be interrupted, and it has to be a bit challenging to require concentration.

An example would be an interview or a college exam: you are pressured and need to stay focused, yet the time passes very quickly.


Many artists are similar too, experiencing flow while creating.

But then there are many successful ones who follow a schedule and still create an amazing body of work. For example, Roald Dahl: based on the notes in his books, he was someone who followed a daily time schedule, and he did alright.

So, do what works for you I guess.


I agree, I've heard of 'the flow' but it's never happened to me.

Perhaps related, perhaps not, I never 'sink in' to films I'm watching. My old GF seemed to be able to, but I'm always on the outside looking in, never immersed. I wonder if that's linked.


I’m the person you replied to - I’m a big film fan but I have the same “issue”. The only time I can get immersed is at a movie theater and even then it’s rare (last time I was immersed it was with Parasite). But again, I enjoy the film and am able to remember all of it so not sure if it’s a problem.


I'd not intended to suggest it was a problem, only that it was perhaps related to the inability to get into the flow. Merely an observation.

Parasite, the Korean film?


What do you think of pair programming, and its more extreme form, mob programming? Have you seen either of those work well in any context?


Some thoughts;

1. At some point in the 90s there was an "Extreme Programming" movement, or XP as it were. I distinctly remember the dogma built up around this, to the point where I started being forced to do pair programming in a formal sense, all day. Thankfully I just quit. I guess I prefer my coding to be less than extreme.

2. As others have mentioned, short term pair programming (never works >2 imo) is an excellent way of transferring knowledge (a model) to someone else, or to make progress on complex problems when you are stuck (the rubber duck effect is real).

3. I'm afraid I really must be getting old as I've never heard of "mob programming", and to be frank I've spent the last hours being horrified that such an idea exists. I guess in a way code reviews can be like that sometimes. lol.


Yeah but just about nowhere did XP properly, though many places claimed to in the late 90s, early 00s whenever it was the extreme hotness. It was one of those peculiar cults everyone talked up, but just about no one understood or had even read the book.

One place I was at did everything, including onsite customer etc. In context pair programming managed to stay enjoyable for the entire time and they didn't seem to have high turnover. It actually kept on feeling more productive. I was there around a year (contract). I never encountered it again so can't comment if that was sheer luck or the people (or project) they happened to have...

There was a spell when every job ad seemed to want to claim it, but basically picked pair programming and two to five of the bullets and handwaved or ignored the rest. Or merged it with plenty of old school waterfall project management, Gantt charts, random bits of UML or some other tangent. I guess I'm not surprised so many ended up hating everything around XP and pair programming as it took many forms. It made for a few surreal interviews after that one encounter of an employer who'd adopted it fully and properly.


When almost no one gets a practice right, maybe there is something inherently wrong with the practice, given that so many smart people fail to implement it correctly.


I've done non-forced pair programming for suitable tasks that were really math-heavy, in a control algorithm where I essentially double-checked the math. It worked really well that time, but always doing it would make me hate my job.


In my experience pair programming works wonderfully for short, intense and complex problem solving scenarios. The extra pair of eyes and a second brain gnawing at a nasty problem is a good way to augment each other's capabilities.

In fact, I dare say that pair debugging is where the practice really shines. Doing it all the time, for all tasks? That won't fly.


I’ve only seen mob-programming to be adopted by teams of junior-slackers, but YMMV


Pair programming was never intended to be a full time mode of development. Half an hour once a week (or whenever) is good enough, and can help with mentoring junior developers. But it is really moronic to do that full time.


"intended"? By who? Based on what?

I worked at companies that had full time pairing and it worked well.

When you tried full time pairing, what were the issues you observed that made you conclude it was "moronic"


.. by the people who started it and popularized it. I have never tried it full time because it's obviously patently moronic to do it that way, and would destroy all enjoyment of the job. I have worked with highly talented developers and can not recall any one of them wanting to pair full time. I have received huge benefits from occasional pair programming with senior developers in my less senior days. However, I need to try full time pair programming as much as I need to try jumping into a fire to verify that it's not a smart idea. If my employer forced me to do that I would quit on the spot.


Pair programming works if used as a mentorship. You’ll need to have a senior dev and a less experienced one.

But I guess this is just the original intent of pair programming.


I agree with that, and can also relate to this point (I experience exactly this in my current project): my colleague has already complained about being interrupted many times. The problem is that the non-coding colleagues want to ask just a 'quick question'.

Thanks for the quick reply.


Not sure I agree with 6. I find that if you develop for reusability and you're a bit creative, you can actually reuse a lot (though it might depend on how tightly you define "other contexts"). I have to say that functional programming plays a very important role in this, though.
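To illustrate the functional-programming point with a minimal sketch (the combinator and both use cases are invented for illustration): a small higher-order function can be reused across unrelated contexts with no changes.

```python
def apply_with_retry(fn, attempts=3):
    """Return a version of fn that retries transient failures."""
    def wrapped(*args, **kwargs):
        last_exc = None
        for _ in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
        raise last_exc
    return wrapped

# Context 1: flaky parsing of user input
parse_int = apply_with_retry(int)

# Context 2: a completely different domain, same combinator unchanged
def read_config(path):
    with open(path) as f:
        return f.read()

read_config_safely = apply_with_retry(read_config)
```

The reuse comes from the function abstracting over behavior rather than over a particular domain, which is the "bit creative" part.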


All reusability has limits! Usually work is carried out to satisfy a set of requirements. As long as the requirements match the old ones in a different task, the product can be reused, but where there is deviation, alterations will be required to address the differing bits. We may predict future requirements and address those too through generalization, but that must have limits, since addressing predictions is by definition uncertain: the code may still need refactoring in the future while consuming actual resources now. Even the STL of C++ needs rework, as it fails to address certain requirements or addresses outdated ones. No code will be completely reusable all the time; in fact, it is only reusable when specific circumstances match. Generally speaking, full generalization is impossible.


What formal training do you recommend for cultivating social and communication skills?


That's a really good question; I'd pay good money for developer focused communications training. Some practical ideas;

1. try to engage at work with more activities, you'd be surprised what activities you can help out with, or maybe arrange.

2. never pass up a customer focused activity, volunteer and push to be involved.

3. ask a sales or business guy at your company to be a mentor.

4. talk to random people beyond small talk (parties?).

5. read books (mileage varies).

Something like "toastmasters" for coders would be so cool.


Great list. (1) and (10) seem at odds, or alternatively are tautological; I think the important thing is to acknowledge that social and isolated modes are both required to develop software, and that getting the balance right is hard, and isn't the same for everyone.

Seems to take people a long time to figure out (4) and (6).

(2) and (3) and (7) really apply to most things.


Nice list. Thanks. I’m gonna clip that and ruminate on it a bit. I dispute 10 strongly, tho’. Our projects are largely built collectively, for collections of people and machines; they use community-derived patterns and practices; they are typically too large to keep the whole thing in (human) working memory; and they are better with code review.


> 10. coding is not a social activity.

I wish multiple tech leads I've interviewed with, here in Berlin, realised that. There's just so much cargo-culting around XP.


Sage advice from an obvious veteran in the field.


I’ve been programming professionally for 20 years too.. What I’ve learned is there’s no end to the “architects”, “mentors”, “coaches”, “team leaders” and “know it alls” who want to condescend and tell me how to do my job.

20 years of programming has taught me to avoid working with people who take themselves so seriously and think they’ve figured it all out. 20 years has taught me the importance of compromise, give-and-take, and following as much as leading. It’s also taught me that there are people who’ve only been doing this for half as long that are 10 times better.


If I had more than one upvote to give you, I definitely would.

Although I have less professional experience than you, around 6 years combined in software development and information security, I've also come to appreciate people who find the right balance of taking what they do seriously without being uptight about it.

Which brings me to a trait that almost always puts a dent in my respect for anyone regardless of seniority: not asking enough questions and being quick to make statements.

I loathe this so much that I've turned the opposite into my work mantra: "no statements until there've been enough questions".

IMHO, this "small" tweak changes one's interactions and their outcomes with other co-workers for the better.


Don't forget the self-described "thought-leaders".

Oh you've been ~programming for ~3 years and you're now a thought leader - fantastic delusion.


> there’s no end to the “architects”, “mentors”, “coaches”, “team leaders” and “know it alls” who want to condescend and tell me how to do my job

Shame I can only upvote this once. Avoid like the plague. B Players, natural vs institutional authority etc.


those who figured it all out are fake


This is not the best advice I've ever read; some of it smacks of "silver bullet" thinking. For example, it's sometimes better to duplicate than to couple. Within the same component, sure, don't copy-pasta. Also, I think TDD is frankly ridiculous; I've seen many teams spend more money on gold-plating test lines of code than actual code. I've seen other, better teams spend all their efforts on full integration testing only. There are no golden rules; it's just hard work.


Glad it's not just me.

While in general the advice on duplication is reasonable, I think it tends to get taken too far, and the example is a poor one.

Presumably for now Mexico applies a different sales tax % but there are so many unknowns on how Mexican tax rates differ that I'd keep the method separate until I'm certain the abstractions are similar, otherwise you might end up with:

    calculateBill(locale, appliesExemption, exemptionRate, regionalTaxRate, nationalTaxRate, clothingUntaxed)
Premature abstractions seem worse to me than some duplication.

Besides, something like bill calculation should have loads of tests.
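A minimal sketch of keeping the methods separate until the abstraction is certain (the rates and function names here are invented for illustration, not real tax rules):

```python
def calculate_bill_us(subtotal, state_rate):
    """US: a single state sales-tax rate, nothing national."""
    return round(subtotal * (1 + state_rate), 2)

def calculate_bill_mx(subtotal, iva_rate=0.16):
    """MX: national IVA; kept separate until we know how the rules really differ."""
    return round(subtotal * (1 + iva_rate), 2)
```

Two small, testable functions with obvious duplication beat one parameter-soup signature; merging them later, once both sets of rules are understood, is cheap.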


True story. Dogmatism is always bad. If copy-pasting into 10 almost identical functions seems reasonable, just do it. At least it's obvious that the code is a mess, and the mess is not hidden in deep abstractions.


Sometimes the requirements just demand lots of arbitrary but similar code, and chasing elegant abstractions is a waste of time and less robust than the nearly duplicate code blob.


Agree, but please leave a comment to that effect so that I don't come into the codebase long after you've left and waste the same amount of time refactoring only to reach the same conclusion! :)


Not a fan of the concept of TDD as is, but I have adopted it as the approach to bug fixing.

Usually I start with reproducing the bug in specs, with at least one or more specs failing due to the bug. Then I go on to fix the bug and see the specs turn green. I think TDD is pretty well suited for the use case.
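That workflow can be sketched minimally like so (the pagination bug is invented for illustration): write a spec reproducing the report, watch it fail, then fix the code and watch it turn green.

```python
def last_page(total_items, page_size):
    # Old, buggy version was: total_items // page_size + 1
    # (reported failing case: last_page(20, 10) returned 3, not 2).
    # Fixed with ceiling division:
    return -(-total_items // page_size)

def test_reproduces_reported_bug():
    # Written first; failed against the old implementation.
    assert last_page(20, 10) == 2

def test_partial_last_page():
    assert last_page(21, 10) == 3
```

The failing spec doubles as a regression test, so the bug can't quietly come back.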


You should have written that article, I think. As another pretty ancient developer, I agree with more or less all your comments. I think pair programming has its occasional uses, but that's about the only thing I'd add to the stuff you've written. Silver bullet thinking is a big warning sign, and advocating some shiny tech generically (e.g. TDD) means " go somewhere else for advice". I really like this article, for advice on how to write code: https://bit.ly/33fb94w ("write code that is easy to delete, not easy to extend"). Copy-paste is definitely a useful practise, at the right time and place.


You're in good company with Jonathan Blow and, it appears, John Carmack. Teams pushing TDD might be a good smell test for whether they know how to efficiently code.


Until you mentioned him I had never heard of this Jonathan Blow character. The thing is both of these people seem to do game development. If you write code that is close to the hardware and needs every ounce of performance there might be a point to this. It would make applying TDD both harder to pull off and less beneficial.

When one is writing complex logic that is supposed to receive both frequent new features and keep working correctly there is nothing that beats TDD.

Let me attempt to compare this to bowling, even though I know nothing about bowling. If you just occasionally like to go out bowling, you can use whatever technique you like, and you will get better if you just bowl a bit more often. If you want to play bowling at the national level, you may at some point have to drop the technique you made up yourself when you started out. You need to go through the uncomfortable stage of learning a new and better technique and temporarily being worse at it. I think the people not doing TDD are just refusing to go through this, because there is not much of an objective standard of quality for developers. We are in a field where code that lacks any sort of quality is standard and where breaking features three times a week is standard. TDD will fix these things.


TDD works great when the requirements are known and there’s little mystery. Oddly enough, waterfall methods lend themselves well to TDD, which is usually associated with hip styles like agile.

The reality is most projects start as exploration. What you’re thinking might not even be possible. Can you even approximate the classes you’ll need yet? For these scenarios TDD fits in awkwardly.

I prefer to do a little back and forth. A little dev. When things are looking stable, a little testing. Towards the end, a lot of testing.


I've found the opposite. When I start to write a test, I often realize I don't know exactly what I need to do because the requirements are unclear or somehow conflict with some other assumption of the system. That realization prompts me to go talk to my product owner, which forces them to clarify their requirements and target a more coherent idea of what the system should be.


There is some point to your comment, but also something that seems to me to be a misunderstanding. Also, part of it seems self-contradictory to me.

There can indeed be exploration stages where TDD does not help that much. E.g., if you are going to interface with a piece of hardware you have never seen before, and you need to get some sort of working communication going, there is much point to what you say. But as soon as you know what sequence of commands makes the hardware work, you have arrived in the territory where TDD is highly beneficial. You can then write a test requiring that your software indeed produces the correct sequence of commands, and after that you can refactor your exploratory code while having to check that it still works with the actual hardware much less often than would otherwise be needed.

The remark about TDD and waterfall seems much less true to me. In my original post I wrote 'both frequent new features and keep working correctly'. That does not describe waterfall at all. In fact, and this is where your message seems to get a bit self-contradictory, your post actually sounds much more waterfally to me than mine. It contains the words 'Towards the end'. It is waterfall projects that have an end that was envisioned right from the start. Whatever 'agile' means, it seems to have the feature that we never stop adding features, and that the software is supposed to be in a working state in between adding them. So, if one is actually doing that, there are dragons in the sentence 'Towards the end a lot of testing', because it actually turns into 'always lots of testing', which costs lots of time. In the case of the hardware example, it would also require that every developer has a specimen of that hardware on his desk next to his computer. As I happen to work in a field where we interface with hardware that is three stories high, we should realize that this is not always practical.

One misunderstanding that might be going on is that perhaps you are assuming that in TDD one would test just one class at a time. I generally prefer to test a set of classes, i.e., a subsystem of a whole program. In that case the tests are about properties of the program that, in many cases, the user could recognize as beneficial. There should be less churn at that level. I suppose some people would call this behaviour-driven development instead, but I am not so very married to terminology and will just keep calling it TDD. Testing individual classes can also be helpful, but if they contain rather simple logic I am not sure writing tests for them is very helpful. At some point you would just be testing whether the processor really is capable of copying values, which is not that interesting.
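The hardware example above could be sketched as a test that pins down the emitted command sequence (the fake transport, driver, and command names are all invented for illustration):

```python
class FakeTransport:
    """Stand-in for the real hardware link; just records what was sent."""
    def __init__(self):
        self.sent = []

    def send(self, command):
        self.sent.append(command)

def home_and_start(transport):
    """Exploratory driver code, now pinned down by the test below."""
    transport.send("RESET")
    transport.send("HOME_AXIS 0")
    transport.send("START")

def test_home_and_start_sequence():
    fake = FakeTransport()
    home_and_start(fake)
    assert fake.sent == ["RESET", "HOME_AXIS 0", "START"]
```

With the sequence asserted like this, the exploratory code can be refactored freely, and the three-story hardware only needs to be touched to re-validate the sequence itself.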


I think as a first order approximation the level to which you can deploy TDD is inversely proportional to how difficult your problem is with regards to computational complexity. The most TDD’able code is a freshman level CS 100 programming assignment. The least TDD’able are things like real time simulations, numerical computing, machine learning, etc. In the latter case, you can TDD the boring parts (reading inputs, simple pure functions, etc) but TDD breaks down quickly if you have a huge state space, side effects, complex outputs, etc.


That depends. A problem may be quite difficult in computational complexity, but if you can find a simpler, less computationally complex version of it that can be used for automated tests, those tests can be quick. E.g., test your neural network training procedure with a smaller network, which will give a result that performs worse, and after that use the thus-obtained program with a bigger neural net.
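A minimal sketch of that scaled-down-test idea, with the "network" shrunk all the way to a single parameter and a quadratic loss (everything here is invented for illustration):

```python
def loss(w):
    """Toy stand-in for the real training loss; minimum at w = 3."""
    return (w - 3.0) ** 2

def train(w, steps=50, lr=0.1):
    """The same gradient-descent loop that would run at full scale."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3.0)  # gradient of (w - 3)^2
    return w

def test_training_converges_on_tiny_model():
    w = train(0.0)
    assert loss(w) < 1e-6
```

The test runs in microseconds but exercises the same training procedure, so a broken update rule is caught long before a multi-hour full-scale run.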


It's not about runtime performance. It's about how well you can actually assess your code by matching small and well-defined inputs to small and well-defined outputs.

LeetCode / CS101 problems are almost always like that. In the real world, it's also useful to separate out any tricky "policy" logic into nicely testable pure functions. But you are still going to have a bunch of side-effect munging left over, and unit testing is terrible at that. Tests that only assess their own mocks are a waste of time.


Could you give an example of situations where there would not be small and well-defined inputs and outputs because this certainly does not sound like a common situation to me?

And what do you mean by 'Tests that only assess their own mocks are a waste of time'? Certainly, many side effects can very effectively be checked by mocks. Now, it could get much more difficult if the side effects were very large and hard to define, as you said in the paragraph before that, but I have some difficulty picturing that as a common situation.
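For what it's worth, a dependency-free sketch of the distinction (all names invented): the side effect here is "a record gets stored", and an in-memory fake lets the test assert on the effect itself rather than merely restating how the mock was configured.

```python
class InMemoryStore:
    """A hypothetical test double standing in for a real database."""
    def __init__(self):
        self.rows = {}

    def save(self, key, value):
        self.rows[key] = value

def register_user(store, name):
    """Code under test: validates input, then performs a side effect."""
    if not name.strip():
        raise ValueError("empty name")
    store.save(name.lower(), {"name": name})

store = InMemoryStore()
register_user(store, "Alice")

# Asserting on the store's resulting state checks the actual side effect;
# a test that only asserts "save was called once" restates its own mock.
assert store.rows["alice"] == {"name": "Alice"}
```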


> If you write code that is close to the hardware and needs every ounce of performance there might be a point to this. It would make applying TDD both harder to pull off and less beneficial.

It's not really about being close to the hardware[0] - there's many other things that are unrelated to low-level performance tweaks that make TDD or even unit tests a waste of time for experienced game developers.

When someone like Blow or Carmack sees an automated playtest fail, they have a honed intuition for what caused it. This intuition was built up over many years of hard-earned experience. They simply don't need the lower level unit test that says "this matrix multiplication math is wrong" - they'll see a collision fail or a weird scaling thing and _know_ right off the bat the candidates in the codebase where things went wonky and fix it before you can even say "look at the logs" :) Since they get the same results without having to write unit tests everywhere, and especially without writing tests first, it's simply an unnecessary overhead to make them write/maintain it.

I don't think anyone would argue TDD and/or unit tests are valueless, it's just a cost/benefit question. Pretty sure they'd use any technique where the benefit outweighs the cost- and for them, for many reasons in many varied scenarios, it just doesn't.

[0] actually, if anything, I'd bet that the stuff that's really close to the hardware like the core engine stuff (math, memory management, etc.) is the one place they would consider having TDD and/or more unit tests!


If you already know how to code really well, you can easily make a test that tests the things you want to do, and then write code that actually does those things.

If you can’t write code well in the first place, you’ll write terrible tests, and then even more terrible code to make them somehow succeed.

I’d argue TDD has value in the first case, but most teams are more like the second variant.


This all assumes that you know what "the things you want to do" are.

In my experience, coding is the easy part, and pinning the relevant stakeholders down on what they want coded is the hard part. If you're an employee, the hardest part of your job will be in a.) understanding what'll make your boss happy and b.) communicating how to make that realistic. If you're a B2B business, the hardest part of your job is figuring out a.) who your customers are and b.) what they want. If you're a B2C startup, the hardest part of your job is identifying a market with lots of people who want your product.

If you're a maintenance programmer where these things were hammered out a decade ago, then TDD can be a godsend. If you're working with an excellent business analyst / salesperson / cofounder who can hammer these issues out with enough precision that you can trust their answers, TDD is helpful. Neither of these situations describes the gaming industry, nor a number of other very lucrative industries.


If I was arrogant enough to consider myself a good programmer after 30 years of doing it, I would suggest, as humbly as possible, that perhaps TDD is something better coders can do in their heads and 'skip over'. How do we know what API to build? Because the code needs it. How do we know the code works? Because I just ran it.


Which comes right back around to the idea of who you're writing code for. I suspect much, possibly most, of the time it's not purely for yourself, but for your team, or other contributors to the project, or the person that has your job a few years from now, or even you a few years from now.

Just because you have the entire system in your head right now, will you when you revisit it a year later to add a feature or fix a bug? How much time will it take to internalize completely? Tests allow you to know less about the system overall and still make assured changes to smaller portions of it, because prior you (or someone else) has spent the extra time to make sure any mistakes are caught early and easily.

The go-to counter to this is that you should really know what you're doing and understand the system you're working in, but that's often not only infeasible, but impossible for one person (how well do you know all the libraries you're using?). There are plenty of codebases out there where I think no one person can internalize it all and understand it well enough to always make good choices, and since you'll have to choose what you want to remember, a helpful tool to make sure you don't violate some important facets of how it works is useful.

Now, much of this is about testing in general, but if you think about it, there's a lot similar about writing new code and coming into a large chunk of code that you need to alter but have limited understanding of or poor recollection of. The same things that keep you from violating how it should function help you design coherent functionality in the first place.


I only know some of Blow's talks. Is there any written advice from him?


Copy/paste of code might have saved my team some work. We had 2 versions of the same tool built by a third party like 10 years ago that works within a major application the business uses heavily. One version of this tool had been rebuilt with a 64-bit version. The other has ceased working at all on Windows 10 which we are deploying to all users on new computers.

They are functionally the same tool beyond one config file setting defaults and another config file managing access to components of the tool. After playing with the differences, we iced the broken app and updated the setup KB with the config differences.

As we upgrade the core app that the vendor tools work within, we are glad this headache had a quick fix; the tool is being cut anyway, as the new version of the application natively supports what it did in the old version.

That being said, don't use this as a reason to copy/paste, or at least document it for the future owners of the tool, so we don't waste what little time we have troubleshooting functionally identical software.


"There are no golden rules, it's just hard work."

But there are good and bad principles, like avoiding copy-paste. Still, I would never say "never copy-paste code." I do it when I hack something together quickly, and if it stays, it needs later refactoring. No problem there... only if the refactoring part gets forgotten do bugs sneak in.

Also: the advice that less is (usually) more.

Writing code is easy. Writing good code that works and is readable and maintainable is hard. But too little code can also go against readability.

So like many others here said... dogmas are stupid. There are different scenarios; just use what works for you.


> Also I think TDD is frankly ridiculous; I've seen many teams spending more money on gold-plating test lines of code than actual code.

I don't see your point. You'd still have the same problem if you had a team gold-plating some other code instead of doing their job. In fact, without TDD your problem is far worse because you don't have checks in place to evaluate if some code works as expected.


I like to do TDD (Test Driven Debugging). I have found it to be a bit lackluster for development (we write the feature, then tests up to 100% as snapshots), but for debugging there is nothing better than replicating the bug with a test, making a small change we believe to be the correct answer, and it works (or it doesn't).
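A sketch of that workflow with a made-up bug (pagination that drops the final partial page): step one is a failing test reproducing the report, step two is the fix, and the test then stays on as a regression guard.

```python
def page_count(total_items, per_page):
    """Number of pages needed to show total_items, per_page at a time.

    The (hypothetical) buggy version was `total_items // per_page`,
    which silently dropped a final partial page. Fixed with ceiling
    division.
    """
    return -(-total_items // per_page)

# The reproduction written from the bug report, now passing:
assert page_count(101, 10) == 11   # buggy version returned 10
assert page_count(100, 10) == 10   # previously-working case still works
```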


I call this DDT (Defect-Driven Testing) :)


It's still TDD but a specific subset: issue/bug reports instead of feature requests.


That's why I deploy straight to production. Crowd sourced QA testing! </joke>


It's actually not that far fetched. With some environments/orgs it's really next to impossible to test any other way than to push a canary release on some small set of users and start monitoring the logs.


agreed! all rules have limits, even the copy/paste one. nothing substitutes for accuracy and labor.


I like TDD but it should be adapted to the project. For some projects, I only do integration tests, for others, I only write the unit tests for one or a few of the more complex modules and test the whole system manually then if needed I will add more tests later after the project code has settled.

It depends on a lot of things; such as what kind of complexity the project's modules are prone to (algorithmic complexity or integration complexity), team size, security requirements and reliability requirements.

It's very important to keep the testing as minimal as possible at the beginning. Sometimes minimal means a lot, but you need to be able to justify it, because tests (especially unit tests) lock down interfaces and functionality, and this can go against agile principles if your project is innovative and requires that agility.

After 15 years experience coding on many different projects and companies, what I learned is that blanket statements are often unhelpful. You need to critically evaluate every case as unique and make arguments about why and how your case differs from the norm (and it usually does).


Agreed, I don’t write a lot of tests until happy with the design. When that solidifies it’s time to get cracking on the unit tests.


I have a variant on this. I start with a bullet-list of functionality I think I’ll need. Then I write one test each for items that could impact my architectural choices. Once I finish that first pass I go back and flesh out each item with multiple tests as necessary.

Fleshing out the architecture early is the trickiest part for me. Occasionally I miss a seemingly-simple feature that ends up requiring an architecture change. If this happens late in the process, I may pay for the oversight with hours of refactoring.


One more piece of advice for young programmers: make the effort to learn the domain you are programming in. If you are writing finance software, use the opportunity and learn finance/accounting. If you are writing retail software, learn about retail. If doing some network automation, learn about networking. I neglected this early on and picked up only enough to complete my programming tasks. Now I regret this, as I could have ended up with more options across the industries I worked in. You could maybe even start your own business if you become an expert in some domain.


From my ~15 years of experience, if there is just one piece of advice I could give to someone starting their career as a software developer: irrespective of whether your work is exciting or not, it is a joy in itself to keep working on improving your craft, which is writing software systems. Excellence is always a moving target, but if you stop working on your craft, the joy you experience as you become more senior and older will keep decreasing.


This is my experience as well. At this point in my career, craft and quality is about the only thing that keeps me from ejecting.


Nice.

The way I heard it from an English professor:

Writers write. The only way to become a better writer is to write more.


Started coding in 1987. Have no idea what I’m doing. Regularly panic about how little I must know. Can’t figure out if people giving advice and writing books are genuinely a lot better than me, or just more confident.


Haha, same here (but I started in '81-'82). Love programming but get my kicks out of running code in production, not written code in a repo, so I'm always in a hurry to ship and usually don't have time to learn the new, shiny tech and practices. Which makes me feel out of date, ignorant, impostor-ish. But then I realise (or tell myself?) that I'm shipping working functionality way faster than most, and that I can't be so bad. I think.


It's very funny... I bet you have no problems explaining complex problems to the Junior and Senior developers, and receive high marks for knowing highly technical details.

I wonder if there is a correlation between amount of knowledge and Impostor syndrome kinda like the four stages of competence. At the beginning, you may think you know a lot but as you get further along, there's less and less confidence that comes with more knowledge.


The one point in this article that I will challenge relates to duplication of logic. The moment you've come across two similar scenarios that on the surface may benefit by a refactor and consolidation of logic, don't just instinctively jump to it. Refactoring to a universal model can take a great deal of time and effort. Days, if not weeks, may pass before your universal design is ready. Consider the cost of time and effort. Others are waiting for this work to be finished. Time is not on your side. Make sure you have a damn good business justification for consolidating designs and fight your biases.

I am advocating for knowing the costs and benefits of your refactoring decisions. If you can't predict what the costs are, take that as an indicator that maybe you should avoid refactoring, just yet. Consider as a litmus test discussing costs and benefits with people whom it will affect and try to reach an agreement. If you dread the thought of having that discussion, you probably ought to avoid refactoring. If you can't argue in favor of your refactoring idea, it's not worth the effort.

Also, consider that just because you can re-use flexible designs doesn't mean you will. You may not be the best person to consider the probability of re-use. Assume that it's unlikely: YAGNI (you ain't gonna need it).


Started coding in 1988. What I've learned, it seems, that there are many styles. That they are worth learning. They are part of the art we practice. There are rules that these styles bring. And compromises.

Zen of Python. The exceptional beauty of John Carmack. Rebases of Google Style. Dive into a Linux Kernel. Very Objective Apple Style. Infinite pains of TensorFlow. Simple magic of SQL. Clarity of PyTorch. Unique ways of Prolog. Formality of Antlr. Frictionless Microsoft Style. Completeness of Theano. Abstract Syntax Notation One. Eldritch style of Michael Abrash.


Ditto methodologies.

Maybe analogous to mixed strategy equilibria from game theory.


Regarding avoiding copy-paste; I'm a JavaScript developer, and from time to time, I run `npx jsinspect src` in the root of my projects to detect copy-paste. The results are always interesting.


I've got slightly mixed feelings on copy-paste, because avoiding it sometimes leads to lots of dependencies--some with conflicts--and you would have been better off writing if (s != null && !s.isEmpty()) yourself.
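To make the trade-off concrete with a toy helper (hypothetical, not from any library): a few lines of copied code you own outright, versus a transitive dependency whose only job is to answer "is this string blank?".

```python
def is_blank(s):
    """True if s is None, empty, or only whitespace."""
    return s is None or s.strip() == ""

assert is_blank(None)
assert is_blank("   ")
assert not is_blank("hello")
```

Four lines to read and maintain, versus version conflicts, supply-chain exposure, and a left-pad-style single point of failure for something this small.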


cool, if this is reliable enough to be part of a CI process it will be super useful


Never ran into a problem with it myself but ¯\_(ツ)_/¯


I'm at a point where I think that "(2) Code is a liability" is a fundamental revelation that separates developers. It acts as a proxy for so many things, including some from this post:

- Code duplication should be avoided, but not if the alternative is complicated or highly-coupled code. Duplicated code is increased liability, and complicated code is increased liability. Sometimes two similar but separate functions is just easier to read, use, test, and maintain; assessing the "liability" helps you make the best decision.

- Tests are good because they decrease the liability of other code, but they themselves are a liability, so you can go too far (as many other commenters note about TDD).

And "Code is a liability" is not a silver-bullet. It isn't specific or prescriptive enough, and that's a good thing. It requires experience and effort to understand what that means and to make use of it on a case by case basis. But I think it does point you in the right direction by posing the right questions and focusing on the important trade-offs.


> Code duplication should be avoided, but not if the alternative is complicated or highly-coupled code.

100% agree.

Along these lines, avoid applying DRY principles to "incidental duplication"; You can end up coupling completely unrelated code if you are blinded by removing all duplication.


This is so important! You must have separate functions, even if they contain the exact same code, if they are semantically different.
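A toy illustration of the point (names invented): the bodies are identical today, but the two limits exist for unrelated reasons and will change for unrelated reasons, so merging them would couple checkout policy to API throttling.

```python
def max_items_per_order(customer):
    # Business rule owned by the checkout team.
    return 100

def max_requests_per_minute(client):
    # Rate limit owned by the infrastructure team.
    return 100

# "Deduplicating" these into one shared constant saves two lines now
# and costs a risky untangling later, the day one of the numbers
# changes and the other must not.
```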


The stack is too deep. Nothing is easy anymore. The technology count is staggering. I'm close to retirement. I'm looking forward to pursuing my personal programming interests while I work some mundane, no-responsibility, minimum wage job.

Programming is a ridiculous career path.


Type 'research programmer' into google jobs or zip recruiter or something, go take a massive paycut to work for a cancer lab or other research area. No bullshit whatsoever, best job I had was as a 'Clinical Statistical Senior Programmer' in a lab with a bunch of doctors, paid barely more than a garbageman at $55k/year, but I actually enjoyed going to work where everyday you're digging into The Art of Computer Programming or reading some arxiv paper, you have no boss/CTO overlords pushing shitty abstractions like Kubernetes except 'this idea has to work, do whatever you want to make it work'.


I interned at an observatory as an undergrad, which was a similarly fun position. Plenty of opportunity to architect tons of smaller services or tools or contribute to open source astronomy libraries. Just about every facet of computer science was used in one way or another, and I got a taste for almost all of it. Databases, networking, front and back end, real-time computing, among others. The only thing I didn't really learn was how things work in the corporate world, with BAs and product owners and bureaucracy.


Amazing that it takes returning to the academic research world to get back to actual practical no-bullshit programming.


> Programming is a ridiculous career path.

You have blinders on. Programming is a far better, far less risky career, and pays far better than many very popular career paths (architecture, teaching, sports, arts, research (be it science, computer science, or humanities)).

Context:

- sports and arts are obvious

- I did my stint in research, and programming (even at FAANGs) is far less stressful and far easier to find a good job in

- my wife worked in architecture / works as a teacher


Yeah, you might be right, but I fear this might just push the OP closer to utter devastation as he realizes the working world is fucked everywhere.


There are grades of grey.


Coding school bootcamp instructor could be a thing. It's basically the same as a teacher.

Or teachers at private schools also make more.


It is indeed ridiculous.

I am in my early 30s and I am aggressively saving as much as I can while living very frugally, to hopefully retire multi-millionaire abroad or get a lower paid job that won't stress the life out of me before I turn 40. Hopefully I won't become unemployed before hitting my target, or die of a stress-related heart attack.

I am not completely disappointed about my career choice because it allowed me to save a lot of money (see note), but boy, it is an insanely ridiculous career. Very frequently I wish I pursued something else, there is just way too much and fresh new crap keeps coming, every week. Can you believe 4 years ago almost nobody was talking about containers/Kubernetes? Just to name a random niche. Can you imagine what will happen in a couple years? They say "it's all the same, it's a cycle, you already learned it": that might be true if you are a manager or a PM and you just need a very superficial overview of the ecosystem, but if you are a senior engineer you need to master your craft, so even if a technology has been reinvented you need to spend hundreds of hours learning the details of this new incarnation, that's literally what your employer expects. Ask an engineer if being proficient with VMWare is enough to get up to speed with Kubernetes just because one is the sequel in the virtualization scene... exactly.

Also add the effect of globalization: my company hires talent from Eastern Europe (not consultants, entire teams with local management, product, ...) who produce insanely high quality software products at a ridiculous pace, exquisitely documented. Those people are wicked smart and on top of that work 20 hours a day. I am surprised software engineers are still hired in the US, where working 10 hours a day is already considered unhealthy. I feel the same way as a brick and mortar store that's going to get pushed out of business because of the more efficient and cheaper Amazon.

Note: It also helps that I choose to live in the super expensive Bay Area while not wanting kids, so I don't have the same spending requirements of people with kids and can bank the spread that most people would have to spend on schooling, nannies, good housing, ... I rent a studio for $2k in a ghetto-ish area, and that's 70% of my expenses. If I had a couple kids for whom I needed to provide good housing, nannies and private schools, I would be spending all I earn and there wouldn't even be an end in sight to this madness.


> Those people are wicked smart and basically work 20 hours a day.

What makes you think they work 20 hours a day?


I don't think, I know it. I wake up in the morning (PDT timezone), and they have already produced and committed a massive amount of work, or tackled complex architectural design projects. On top of that, they stay pretty much active on Slack during my entire work day, until their local midnight/1am. By the time I go to sleep at midnight PDT, they are already committing code or following up in previous IM discussions (it's their ~7-8am). So, effectively they work twice my amount, and as I said they are very smart and motivated individuals, so they produce more than twice my amount.

If I were to start a company in the future, rest assured I would do 100% of the engineering hiring over there, after having seen their work ethics and productivity levels.

As an added bonus, they are really not in the mentality of profit sharing/equity compensation typically seen in the US, so you can get them with a cheap salary (think ~$50k/y), whereas a local senior engineer would cost you ~$250k + equity, and would demand a good work-life balance and probably resent you because your free lunch is not as good as FAANG's.


Eastern European here - it's interesting to read how this looks from the other side.

But yeah, $50k is enough for many to throw work-life balance out the window and at times pull all-nighters - I should know, because just recently I quit a job where that unfortunately was a habit of mine - all for approximately this pay.

I don't think it's sustainable, but younger people increasingly choose this way of life, because it lets them obtain status symbols like a MacBook Pro or a lease for a Mercedes C-Class. Also the overall mindset is that this is how people live and work in Silicon Valley - regardless of whether that's really the case.

But here's the kicker: some in my area are saying that we're in a bubble and developer salaries surely must come crashing down eventually. Others think that our only advantage is low cost.

Personally I share neither of these views. It's all relative and as long as real estate prices in Silicon Valley remain absurd, developers will be compensated generously. Perhaps even after they come down - if they ever do of course.


As someone who's heard nothing but good things about Eastern European programming shops, looks like y'all's salaries are only gonna increase. Perhaps you should take some bets on your market with that assumption!


> But yeah, $50k is enough for many to throw work-life balance out the window

It should be pointed out whether you mean net (take home) pay or total cost for employer. The latter is usually about twice the former at these salary levels (at least in some parts of Eastern Europe).


In Russia a self-employed contractor would only pay 2%-6% in total as taxes depending on where they live.

Of course, for that they will only get minimal pension and standard medical insurance. They have an option of voluntarily contributing additional money to the state pension fund to get increased pension.


In the U.S. too, really. Don't forget health insurance.


As a self-employed contractor in Poland I can get away with retaining ~75% of the sum of my net invoices (sans sales tax, which is transparent B2B) as take-home pay.

For that I get rudimentary health insurance - only really good enough for a hospital stay free of charge.

The tax rate is a flat 19%, so I would wager that the tax wedge is likely smaller in eastern Europe than in the US.


Thanks for putting a good word for us. I work mainly for US clients from Serbia, and while I do occasional overtime, or 60-70h/w before a deadline, those should be and are exceptions. Even you as an employer don’t want chronically overworked employees.


Can confirm this. The best engineers I’ve worked with have been remote from Poland and Russia. Insane quality of code architecture


except for the old USSR-taught folk who can't let go of their waterfalls.


Any names you can mention about these work ethic programming shops?


These are not programming shops, that’s the whole point. These are basically full time employees hired on payroll, who will care about your codebase as much as any other member of the team. Programming shops in my opinion never work, there’s always the mentality of just building crap and throwing it off the fence.


At least one of those A’s doesn’t have a free lunch :(


I think both do not.


> As an added bonus, they are really not in the mentality of profit sharing/equity compensation seen in the US, so you can typically get them with a very cheap salary (think ~$50k/y), where a local senior developer would cost you ~$250k + equity.

ugh !!! yeah, you better be scared for your own job, and good luck hiring those devs in the future... it's more likely they'd be hiring you.


I can relate. I'm in my mid-thirties. I've been working as a web designer, which then became a web developer, then front-end developer and now front-end engineer - in the last 13 years.

I've found the same issues as yourself. Things have become more complicated and yet I feel like my position is seen by a more "informed" upper-management as a kind of commodity.

I once had a manager tell me, during a pay-rise request, that developers were a dime a dozen. Fair enough, that's probably true.

I've been thinking so much the last few years of how to transition into a field with less technological noise. I feel like most other people I work with, their only real skill is in communication - as in they don't have any strong hard skills like engineering.

I'm in the same boat as you. Unable to get married or buy a house, or have a family, due to living expenses. I mean the average price for a 1-bedroom in London is upwards of 350k. I've been saving for 13 years and barely have a fraction of that. I don't know who is buying these houses... who can afford them? Or are people willing to gamble with their savings and their future?

Also, I feel like some technical managers don't do a great job of protecting our field, as they are the ones primarily responsible for communicating with upper management. Often times they are downplaying our abilities and our position; they don't realise that their accomplishment in terms of generating efficiency is often compromising our job security. Ultimately most managers don't care how hard it is to be a front-end engineer or full-stack dev or whatever; they just look at numbers, and in many cases they see us as a financial burden or as a commodity. So, you as a technical manager need to bear that in mind during your crusade to automate and optimise x, y and z.


Mortgages. You don't save up 100%, but rather 20%.
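Rough arithmetic on the numbers upthread (illustrative only; real deposit requirements and rates vary): a 20% deposit on that £350k flat.

```python
price = 350_000
deposit = 0.20 * price       # 70,000 saved up front
mortgage = price - deposit   # 280,000 borrowed and repaid over time

assert deposit == 70_000
assert mortgage == 280_000
```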


I hope you can make it work. I am nowhere near the path to saving enough to retire soon. Although I'm still holding onto the hope that I could build my own path start enjoying programming again with the BS and pointless stress removed.

To make that work though, I'm done with the interview circus. It's been such a distraction over the past 3-4 years, that if I wasn't prepping for interviews or staying up to date with the waste of knowledge that they test for in tech interviews, I could've built a successful business already. Now I'm choosing to focus my time there.


These days the effort required to get a well-paying programming job is more or less equivalent to the effort required to start your own business. If you can start a business. It's worth a shot. Worst-case scenario, the business fails and you are back on the job market but you can use your startup as a demonstration of your capability to deliver on projects that require time and dedication. You will also grow and learn a lot and be in a much better position than your competition.


I am in your boat. I've been in the interview circus for 6 months. I'm quitting, I'm going to start something.

I still have the luxury to do so. So I should take my chances because industry sure isn't taking one on me.


My suggestion is: save a nest egg, then get a public service job that offers a pension. If you want to know why, consider how much you would need to save to generate the income a pension provides. Especially since US and EU central banks have decided to punt and just continue to print money.


To pose a frank question: are your own skills equal or better to the heavy hitters on your European crew? If not, how do you know you're not being snowed by a lot of pomp and ceremony to do regular things in an impressive-looking way?


That's right, the pace of work is constantly accelerated by competition, you feel you can never quit, and you aren't paid an amount that for your location _isn't biologically sufficient_. We need unions, we need an end to competition, and we need to take control of the companies ourselves and overthrow the bosses.


Every job sucks though; every job is ridiculous and riddled with inefficiency and bad management. You can view the need to constantly update your skill set as a negative or a positive. It is almost a bit of a privilege that we cannot rest on our laurels if we want to stay relevant. Humans thrive on some stress; too much is bad, but none at all will also ruin you. Granted I am only 8 years in, but I still feel semi-grateful that part of my job is exploration/learning.


It's so interesting to read different people's perspectives on here. I very rarely comment on here, but it's just wild to me how wildly different the conclusions we can all draw from looking at the same set of pretty simple factual data are.

I've been working in this industry in a serious way for about 15 or 20 years? So not exactly close to retirement but I'm not straight out of school, either.

> The stack is too deep. Nothing is easy anymore. The technology count is staggering.

To me, it's insane how much easier literally everything is, and how wildly more productive people can be.

I remember when I was young, first of all, it was impossible to just get access to technologies so you could learn them. Compilers cost money. Databases cost money. Operating systems cost money. It was hard to even get to the point where you could mess with stuff, for financial reasons. Now, to a first approximation, there isn't really a tier of basic software you'd want to develop on/with that you can't get essentially for free. It's not like they're running some totally different "production-grade" operating system or database at the big tech companies, or that the well-funded machine learning labs have access to some sort of special super computer with totally different characteristics than what you have access to. They basically have the same shit. That's bananas!

Then, there's the utterly massive revolution in programmer productivity that has been caused by the internet + automatic memory management. In the olden days, there was very little code reuse, for several reasons, but one of the reasons was that it was too hard to pull in random libraries and start thinking about who would own the storage for the error string they wanted to return to you out of every single api.

Everyone bitches about how javascript programmers think nothing of adding "left-pad" to a package.json along with twenty thousand other dependencies, but the fact that you can do it is nuts! And it makes everything SO MUCH FASTER AND EASIER.

The other day I was just curious if I could point my webcam at my whiteboard and have it pull in the lines I drew on the board with a marker and save them as some sort of curve in the computer. I have no experience in computer vision or anything like that. I don't even really know Python. I was able to get this roughly working in like half an hour with Python + OpenCV + some driver that already made my camera work + some tutorial that came up in DDG. It's like being a freakin REAL LIFE WIZARD.

Take building Android or iPhone apps. I remember trying to write apps for the Palm Pilot a thousand years ago. Just getting the toolchain to work at all could eat up a couple days. This is reminding me what it was like to try to write a win32 app w/ OWL or MFC, starting from the auto-generated code. Anyway, now, you get a tiny computer with shockingly powerful cpu+gpu+a zillion radios+gps+multiple cameras+gyros+compass+accelerometers+gigs of storage + gigs of memory + a super high rez touch screen. You can assume that to a first approximation, everyone in the high-GDP/capita countries has one of these, essentially everyone in the mid-tier, and the low-tier is growing at a shocking speed. Further, they are all exposed to your code through a more or less universal set of interfaces so you don't really need to worry about drivers and hardware support hardly at all. You want to run some code on here? Ok, you can distribute it for free over the air, and you can write your code in an automatically memory-managed language with a freaking gigantic standard library. Oh also the SDK is free. And there's an emulator that runs on x86. Oh and there's a visual multithread debugger that you can attach to the process...from your computer..remotely. Oh you don't need a dev sdk, all the normal units everyone has, those just work.

But you want to build a network application? Something that needs to talk to some sort of service? Well the good news is you can get a database, an operating system, and incredibly powerful web servers and application servers, and they all cost nothing. In fact, they're already installed, on the computers that you can rent by the minute, and the lowest tier if you just want to mess around? It's free.

Mainly what I feel like is extremely JEALOUS that I wasn't born later. Imagine all the cool fucking shit you could have built as a kid instead of wasting your life trying to find a cracked version of Borland Turbo C that worked right. Choosing between the horrendous performance / security / etc of perl cgi-bin vs. the unattainable per core licensing fees of an "application container" or whatever those special jvm things were called. Oh, here's one: DATABASE FREAKIN DRIVERS FOR JAVA. jdbc drivers used to cost money! And they sucked shit!!!! ahahahahhahaha.

I agree that programming is a ridiculous career path because I can't believe ordinary-talent-level programmers in Silicon Valley get paid into the low seven figures a year to essentially work on their hobbies. This has to end at some point. I can't believe this is a real job. It's like your whole life, you love legos, and you're always trying to build bigger things out of legos, but you always run out of the pieces you need. And then someone comes along and says, "Ok kid, here's the new rules. Legos? Legos are free. Every single type of lego, in more or less unlimited quantities, you can just have those." What's the catch? "The catch is you get paid to play with the legos." Wow, that's crazy, but it must be like...a barely survivable wage. "No in fact total rubes who just graduated from school are gonna get paid 3x the median income for a family of four in their first year out of school. People who've been around for 10 or 20 years will make like a million bucks or so." But only in like weird big risky situations, right? "No that will be the deal if you work for the most stable boring companies where you get six weeks of vacation and all your meals taken care of etc."

??? How is this possible?!!?!


I would disagree with the simplicity you say we have today. You can create your first Angular (insert other magic framework here) app in 1 hour. Then 1 year later, most of the time, someone like me needs to get involved and finds a big mess of shit: everything is horribly inefficient (like you move your mouse and 100 functions will run), you are asked to replace a text input with a numeric spinner, and now you need to find some library from a large number of candidates, drop the incompatible ones, drop the ones that are probably garbage made in 1 day and broken in corner cases, import it, wrap it, and integrate it into the giant maze...

All the simplicity brought by the magic of new abstractions goes away the moment you are forced to understand the actual thing behind the magic, and you realize that most places that used that magic would probably have been simpler to write without the abstraction (e.g., all the two-way data binding in Angular would have been cleaner and more efficient done with events, though it would take you 5 more lines of code).
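For what it's worth, the "events instead of two-way binding" tradeoff described above can be illustrated framework-free. This is a toy sketch in Python rather than Angular; the `Model` class and its names are made up for the example.

```python
# Explicit events instead of "magic" two-way binding: a handful of extra
# lines, but you can see exactly when and why every update happens.
class Model:
    def __init__(self):
        self.value = ""
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)

    def set(self, value):
        self.value = value
        for fn in self._listeners:  # each change is an explicit event
            fn(value)

rendered = []
model = Model()
model.subscribe(rendered.append)  # the "view" re-renders on change
model.set("hello")
print(rendered)  # ['hello']
```

A few more lines than magic binding, but nothing runs that you didn't explicitly wire up.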


> Mainly what I feel like is extremely JEALOUS that I wasn't born later. Imagine all the cool fucking shit you could have built as a kid instead of wasting your life trying to find a cracked version of Borland Turbo C that worked right.

Amen. I started programming in Turbo Pascal and Z80 assembler in the 80s, and never laid eyes on a manual, book or documentation about them, except for a few articles on assembler I read from magazines in libraries. I had one beginner Pascal book. Now I can instantly download any paper or book, then instantly download anything they reference. Most software is free too.. I love it, but it's hard not to imagine how very different my childhood would've been. I came across a copy of Zaks' fabled Programming the Z80 in a bookshop a few years ago, which I'd never seen before in person, and I was like My god, I would've given a leg for this 30 years ago.


Yep, sure, you can glue and duct-tape a cool demo in two hours. Somehow that doesn't seem to translate to actually building and maintaining real products, where the demand for developer time seems virtually infinite, and everything ends up released in a buggy alpha-tier state that never gets fully fixed to a mature state.

And somehow my experience using computers & software hasn't gotten dramatically better in the past two decades. Most of the improvement can be attributed to long term hard work (software that I used 20 years ago is today easier to configure and/or has been replaced by something that is easier to configure, and it's not because they rewrote it by throwing some glue and libraries at python over a weekend) or improvements in hardware.


You’re partially right. The problem is the pace: right now there is a bit too much noise from that. But yes, I share your view that in many ways things are better.


Interesting. I'm finding the opposite to be true. While it is easier to "get something working in Python in a half-an-hour", it didn't get easier "to release something that people want".

I'm sorry about your copy of Turbo C. Mine worked fine. Still enjoy pudb.


Very good observation. While the programming tools and infrastructure have improved tremendously, so have user demands increased. To make money today, you can't release yesterday's functionality. SW development keeps moving to higher levels of abstraction, and not everyone likes that (I don't).


Uh? Don’t talk about the pre- or early-internet release process! Copy to floppy disks or burn a CD, send by snail mail. For your little startup, hope you didn’t miss any significant bugs before you paid for pressing 10,000 CDs at a quite significant cost. And that was if you already had a customer. Get your software known? Well, I guess I’ll buy an ad in a relevant magazine.


But in many areas there was less competition by orders of magnitude, right?


The whole market was much smaller. So maybe much less competition, but also much less market when only hobbyists had computers at home and no one had smartphones.

The only "golden age" I can think of was the internet bubble when investors were crazy and threw money on anything that had anything to do with internet. Some kids made really good money "programming" HTML.


Fair points, thanks. (The question was only semi-rhetorical, as I'm only an observer rather than an insider, so I appreciate the genuine answer.)


I'm 25 years into my career. Except for a few years during the dot-com bust, I’ve been loving it. I’m looking for a new job right now, have 6 onsites scheduled and 10 other companies I want to talk to if I don’t get an offer I like. I’m also lucky enough to favor jobs close to where I live.

I feel very very lucky to have chosen programming as my career.


I've only been in the industry for 6 years, not counting my student programming job, and I've started to take tutorial-style notes for myself on all the technologies I work with. For a part-infrastructure, part-software-engineer person like myself, you're right, there's a massive amount of tools you need to know and be a semi-expert in. There's no way I can hold it all in my head. I'm using mdbook and a very simple git workflow to build up a nice searchable notebook, so when I learn a new topic, I can learn it in a detailed way and take some notes that are tailored to how I re-learn. It's taken a huge burden off of me to not have to constantly keep every technology in my head.


> Nothing is easy anymore.

I don't agree with that completely.

I do agree with the fact that the half-life of programming knowledge is far too short.

We flip through languages, technologies, etc. before anybody actually gets truly proficient with them.


Some advice that I heard a while back that I'm glad I followed is to devote different percentages of your learning budget to technologies with different half-lives.

Stuff that has an incredibly long half life like SQL/Database internals, or OS fundamentals, networking fundamentals, CS fundamentals will serve you well basically your whole career. So even a little bit of time dedicated to these over the long run is time pretty well invested.

A larger portion of learning goes on to the treadmill of technological churn, but that's either stuff you are learning now in order to get your next job working with it or are learning now as a result of getting a job that gave you the opportunity to work with it.

The middle ground is getting deeply familiar with libraries/frameworks that have a medium term half life. Usually that's some form of data access.


30 years in professional programming for me. I agree, the stack has gotten so huge that it's impossible to master even most.

But what a great career for constant learners! I'm glad it's the way it is.


What's the point of the constant learning if it's going to be obsolete in 5 years?

That's what I can't stand about the cult of "you get to learn so much at FAANG or a startup" -- at some point in life, you need to spend less time learning new things and start applying the things you have learned. I've been drinking from the proverbial water hose for 20 years. It's tiring now. At some point you need to be able to find the permanent things and reject the noise.


> What's the point of the constant learning if it's going to be obsolete in 5 years?

Well... that's the point, really. If it didn't change then you'd be able to master it all.

The constant change is the motivation for constant learning and (hopefully) improvement.

There's no point in me trying to make a longer point of it. There's a sort of eye-opening account elsewhere in this thread, from someone who's been coding for years, about how our situation as programmers has improved over time.


Are you in the web by any chance?

There are plenty of things where the changes are slow, and 20 year old frameworks are still used up to today.


If you take deep stacks upon yourself, and then hate it, yes, that's a ridiculous career path. Personally, I took iOS development. The tech stack is actually quite shallow, and there are regularly new things to learn. Programming as a career has brought me intense joy and makes me want to go to work.


No.

Your responsibility as a seasoned programmer is not to code!

I'm a 20+ yr experience programmer and still hold the title Sr. Software Programmer and no. I say no.

Yes, the stack is deep, but it is unnecessarily so. Yes, there are layers and layers... but it is your job to educate the less experienced on why less is more and why there is always a tradeoff to be made.

Computer science hasn't moved a whole lot[1] in terms of problem solving since the 80s. Computers have just become faster. The best ideas we have today are still rooted in the principles of the 80s. They are just an abstraction level higher.

So no. I refuse to accept the premise that Nothing is easy anymore.

[1] when considering fundamental theories.


I thought we're not meant to call ourselves programmers

If you're doing it for any reason other than to gain transcendental enlightenment you're in the wrong space


So what job is intellectually doable, people-based and has a high upside (eventually) and doesn't have crazy work requirements?

I'd like to do that job, but last time I checked, there are none.


When people say TDD does that imply writing unit tests before writing code?

Or do they just mean doing unit tests as you code?

I'm never sure when I hear this term because the benefits described (things just working on the first try) seem to apply no matter what order you do it in.


Pure TDD is, as the others have said, to write the tests first. If you aren't writing code to fix a broken test you are doing it wrong.

TDD, though, is really a learning system. The main point is writing the code with testability in mind. You can write them together and get the same results as long as you do that. It keeps the code loosely coupled and the APIs clean.

So, IMO, you can still call it TDD as long as you are writing the code and tests together and are writing the code with the tests in mind. You can do that with the pure TDD or with some variant as long as the result is the same. I guess it depends whether you are referring to the general style or the specific learning methodology.


Unless you're writing super functional stuff, I don't understand how you could reasonably apply TDD when developing user interfaces: I target web, but the same could hold for mobile and other devices.

Because, what exactly is the contract? Do I essentially write integration tests to the effect of "If I click input element with css class .username, type 'x', and then click input element with class .password, then type 'y', then click the button with class .login, the page URL has changed from <mysite.com>/login to <mysite.com>/profile?"

There are no clear API interfaces for UIs. A user wouldn't really care that the button had class .username on it, for example -- that just gives us some way to identify the correct DOM element to perform the e2e test against. And some day, a developer could rename that class name, and it'd break the e2e test.

In other words, I can't imagine what TDD looks like for user interfaces without the whole testing process seeming extremely brittle. Contrast this with: if I call some REST API, I expect this response body. It's quite concrete.


I don't think it is that much different, just maybe in proportions. In pretty much every application I write there are 2 types of code. There is the guts or business logic that is written in the style of a library, making it very testable. The other is the top-level code, the 'main loop'. This is just there to hook together the library code in as short a way as possible. I think UI code differs in that it tends to have quite a bit more of this top-level code that is basically un-testable except for very high level integration/scripted tests and QA testing. But UI code still has some amount of the library/logic code that can be written with testing in mind. I think this speaks to the myth that 100% test coverage is desirable and good. It is not.


> I don't understand how you could reasonably apply TDD when developing user interfaces:

Unless you are trying to test whether clicking on a button or link fires an event, UI tests aren't exactly in the domain of unit tests.


> It keeps the code loosely coupled and the APIs clean.

Having seen some of the interesting designs of less-experienced devs, I disagree that this is a benefit of TDD. TDD may help straighten out some of your thoughts if you already have loose coupling in mind, but it won't work if the dev doesn't already have some experience there.


TDD means writing tests before code

At a very high level: Write the most basic case and it will fail (no code written). Write code so the test passes.

Now write another test that will fail. Then write code so that test passes.

It forces you to write code in a way that is testable and you catch any regressions if previous tests fail.

In the end... you shouldn't.... write any code that isn't required to pass a test.
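The red/green loop described above, as a minimal sketch (the `slugify` helper is a made-up example, not from any of these comments):

```python
# Step 1 (red): write the failing tests first. At this point slugify()
# doesn't exist, so running them raises NameError -- that's the point.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_strips_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

# Step 2 (green): write just enough code to make the tests pass.
def slugify(text):
    return "-".join(text.split()).lower()

# Step 3 (refactor): clean up while keeping every test green.
test_basic()
test_strips_extra_whitespace()
```

Each new behavior starts as another failing test, never as speculative code.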


I figured the value of TDD was to settle on the higher-level functions and APIs before dealing with the nitty gritty.


As a veteran (20+ years) programmer I've yet to witness a project where tests are written before the code for the entire duration of the implementation. I personally feel TDD is the snake oil you need to sell to get your team to write unit tests for max coverage.


> As a veteran (20+ years) programmer I've yet to witness a project where tests are written before the code for the entire duration of the implementation.

Did you make any effort to adopt and enforce TDD? Because, like any good practice, unless someone drives adoption then things tend to stay the way they are.


Of course I did, but like any other process, if there is to be complete adoption, the benefits need to be clearly visible and significant. The problem with pure TDD (and remember that's what I am talking about - writing / updating tests before writing any code) is that there are diminishing returns as soon as the code base reaches a certain size. In fact, at some point the costs flip around to be higher than the benefits (ie: updating code before tests becomes significantly more efficient than the other way around).

In my experience, it has always been easier for multiple developers working on a code base to write the tests once the implementation has been fleshed out and a significant portion of it is 'ready'.


Doing TDD, one should have a strong preference to write the test first. But really, in this discussion, which frequently happens on this forum, the most important thing is often left out. What people are not talking about is that there is also a refactoring step in the cycle. To qualify something as TDD or not TDD, the presence of the refactoring step is actually the most critical.


Some people take liberty with the term, but in general it's failing test first, write code to make it pass, as in the development is driven by the test like the term sounds: 'test driven development'.


Hopefully, when people say "TDD", they refer to the very specific methodology by that name which goes something like this: before making each change, writing a test which fails, and then making it pass with the minimal amount of code, followed by refactoring which keeps all tests passing. No, not exactly unit tests.


Others are right that strictly, it's tests before code, but I think the win from it is that in designing your system, you design it to be testable, spending more time thinking about APIs, and then do go and try to break it.


My interpretation of testing is documenting your code.

If you deliver a set of scripted requirements (Read documentation) prior to the feature - that’s TDD.

If you change the code, you break the documentation.


Some of the things "I've learned in 25 years":

After arguing with a friend who claimed to have discovered a bug in `malloc` because his program crashed (that was a long time ago), I figured that there's a hierarchy of probable causes of bugs, from most probable:

1. The code I wrote [you must understand only your code]

2. A 3rd-party library/dependency [you must understand your dependencies]

4. The compiler [you must understand the compiler-generated code]

5. The OS, or OS-provided library [You must be intimately familiar with OS internals]

6. CPU, or other hardware-related [You must be intimately familiar with how the CPU or other HW works]

It's basically arranged according to how many other programs are using the given facilities. Returning to the "buggy malloc", I did not dismiss his theory outright, just pointed out that if malloc were bugged in such an elementary fashion, no other programs could work; hence there was an overwhelming probability of his code being bugged, not malloc.

---

Just like security features, error-handling cannot be bolted on afterwards. Robust programs begin with defining error-handling strategies and expected outcomes in case of errors and building the rest on top.
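One hedged illustration of "define the error-handling strategy first": suppose the policy, decided up front, is that parsing never raises and every call returns a (value, error) pair. The `parse_port` function and its messages are invented for the sketch.

```python
# The error contract comes first: parsing never raises; every call
# returns (value, error) and callers must handle the error branch.
def parse_port(text):
    try:
        port = int(text)
    except ValueError:
        return None, f"not a number: {text!r}"
    if not 0 < port < 65536:
        return None, f"out of range: {port}"
    return port, None

assert parse_port("8080") == (8080, None)
assert parse_port("http")[0] is None
assert parse_port("70000")[1] == "out of range: 70000"
```

Everything built on top of `parse_port` can then assume the same contract, instead of discovering ad-hoc exceptions later.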

---

I could probably go on, but... meh.


The bug in malloc() is an old classic.

From my experience (about 20 years):

1- Obviously...

2- Some libraries are more reliable than others.

3- Missing, are you working for Valve?

4- I sometimes encountered compiler bugs, but it was things like compile-time crashes, never bugs in the generated code.

5- Never happened to me. Or at least no bugs that weren't documented (there is a BUGS section in manpages, read it).

6- Actually more common than the previous two. In fact extremely common in the embedded software business. But even with PCs, bad RAM sticks and things like that happen.


And sometimes. Yes. It is just a bug in a malloc. And you can fix it. And it wouldn't be there any more. And cool thing, you can even brag a bit that you've fixed bug in a malloc.

By the way, there was sort of a bug in a malloc I remember fixing. Now, when was it. Ah, on 7th October 2006 ;) https://gitlab.flux.utah.edu/gtw/git/blob/dff9d65dc61af8a00a...


LOL reminds me of the time at work when I was helping a fellow SWE deploy his code to a new platform. It didn't work. He immediately blamed it on the fact that we were using a new platform. I then noticed that we were also deploying a newer version of his code onto this new platform, instead of the old code that was known to work on the old platform. I asked him how he knew it was because of the platform (#5 or #6) and not the newer version of his code (#1).

Turned out I was right. And this was his code, not mine. The best part? He had been on the team longer than I had, and this was at Google of all places.

And people wonder why I have lost interest in the field...


Does anybody have some words of wisdom / articles / tutorials / books on TDD for your typical web applications?

I am asking because for me, writing tests before the code is a natural choice for pure functions where I can easily map inputs to outputs. I cannot imagine writing that kind of code without tests.

But the usual task is gluing together tons of components and 3rd party calls (e.g. starting from an HTTP request, doing some transformations, calling some APIs maybe, persisting stuff in the database).

All of this depends on tons of side effects and hard-to-mock components. The only test I can write there easily is a full contract test mapping the HTTP request to the HTTP response.

This doesn't help me with developing the internals (definitely not TDD), it only verifies the end result.

Even worse if there is no clear response but the result is piped into some queue over the network or something similar.

I am either completely incompetent, or I develop applications that are tons of hard-to-test stuff (especially with simple unit tests) with only a tiny core of functionality that seems easy to test.


Here's a few ideas on testing web apps:

1. The internal pure data representation. If you're using some state management like redux or xstate then you can just cache the state on changes and run tests as-is (e.g. running the tests would be sending events and then checking that the state is what you expect it to be).

2. The page itself. Tools like Selenium and Cypress will allow you to easily inspect and run tests against the DOM state (e.g. running the tests would be emulating interactions with the page and checking that the properties of elements are what you expect them to be)

3. Checking the virtual dom (or other middleware). For example, with React, you can "get at" its internal data (at least the level you need) via a testing framework like Enzyme.

4. If your app uses webgl context or is simply too hard to test on any of the above "pure data" levels, you need some sort of image recognition to compare pixels. I doubt this is going to be _perfectly_ testable but you can try your hand at Sikuli or roll your own with OpenCV
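Idea 1 is the most portable of these: if state transitions are pure functions, the tests are just data in, data out. A language-agnostic sketch, in Python rather than JS; the reducer shape mirrors redux but the event names are made up.

```python
# A reducer is a pure function (state, event) -> new state, so "testing
# the UI's brain" needs no DOM, no selectors, no browser.
def reducer(state, event):
    if event["type"] == "add_todo":
        return {**state, "todos": state["todos"] + [event["text"]]}
    if event["type"] == "clear":
        return {**state, "todos": []}
    return state  # unknown events leave state untouched

# "Running the tests" = sending events, then checking the resulting state.
state = {"todos": []}
state = reducer(state, {"type": "add_todo", "text": "write tests"})
assert state == {"todos": ["write tests"]}
state = reducer(state, {"type": "clear"})
assert state["todos"] == []
```

Because the reducer never mutates its input, such tests stay stable even when the rendering layer is rewritten.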


Do you / commenters here really “enjoy” programming as a career as much as when they first had enough skill and enthusiasm to build things?

I think this is a big misconception among devs with less than five years of exp.


I started programming when I was 11. Taught myself C++ from that age. Solved tons of tough problems in personal and school projects before going into industry at age 22.

Now at 35, I definitely think it's time to move on. I won't lie about my lack of interest in the field. The problems are less difficult to solve, but also much more frustrating. A lot more black boxes that you can't reverse engineer when you get stuck. Working with other people in a big company means things move far slower than my ability to get things done.

Plus the proliferation of different tech stacks means that despite the big job market, most of the new jobs require specific knowledge that I don't have, and my existing experience with low level coding and problem solving don't carry much weight.


> Plus the proliferation of different tech stacks means that despite the big job market, most of the new jobs require specific knowledge that I don't have, and my existing experience with low level coding and problem solving don't carry much weight.

None of those jobs _require_ specific knowledge upfront. The problem is clueless fuckwits in HR filtering out CVs based on keywords. General problem solving ability and knowledge of fundamentals means you can actually learn any of the latest shitty frameworks in a few hours on the job.


>The problems are less difficult to solve, but also much more frustrating. A lot more black boxes that you can't reverse engineer when you get stuck. Working with other people in a big company means things move far slower than my ability to get things done.

The part about black boxes you get stuck on strongly resonates with me. Programming can be a lot of fun, when you can rely on your own, or well documented programming.

But enter the world of development in big companies, where every specification never leaves the alpha phase, is written in retrospect and only on request, arrives 3 months later than needed, and where missing half of the absolutely necessary information is the best case. It's just so miserable.


Yes and no. I like building things involving computers. I started programming as a hobby in the mid 80s when I was 12 in BASIC and 65C02 assembly. I don’t think I’ve done a pure hobby project since graduating from college in 1996.

I got bored with “programming” about 4 years ago, then I belatedly discovered “the cloud” and started moving into more architectural roles. I enjoy being able to do the full stack [1] and jump back and forth between the different parts without having to worry about either the infrastructure gatekeepers [2] or having to wait for resources. Any resource I need is just a yaml file or script away.

I also enjoy the force multiplier effect of being able to lead projects.

[1] full stack: front end, middleware, databases, CI/CD, CloudFormation templates, network configuration, etc.

[2] yes at a larger company I would expect to have access to a Dev account to play around with and get my infrastructure as code templates approved.


I'm actually writing an article on this now. My career in web development started over a decade ago. I'm done with it. I'm currently moving on to specialized interests (theory, ML, and BI). As for keeping up my chops, I've also switched to mobile and desktop development, as they aren't such a moving target and it's just so much more pleasant.

Web technologies were fun when I started back in the late 90s early 2000s. Learn the basics, build a server from scratch, throw a few languages and frameworks under your belt, and you were good to go. Of course, you can still build things simply, but good luck getting hired without the latest arcane tech stack in your resume (that will no doubt be forgotten in a few years). In addition, web development skills have become a worldwide commodity.

I am done with the web.


It's funny, when I started I found programming tedious and thought that writing some software with 100 layers of abstractions so computers can talk to other computers was boring and soul-sucking. I liked designing small embedded systems that actually did something in real life with a simple program in C or C++ driving the whole thing.

Then one thing led to another and I got a job more on the pure software side of things, and realized as long as there is some interesting problem to solve it's really not so bad what the end application is.

I find a lot of these comments are depressing. I feel like no matter how complicated the stack becomes, computers are made by humans and therefore can be understood by other humans. I always pity people working in medicine who have to debug what seems like a proof-of-concept that has been endlessly patched by evolution for the past few million years into a working product.

But hey, I only have 5-ish years experience working full time as an engineer so I guess I'm still a baby. I wonder how I'll feel about all this 15 years down the road.


I read years ago (on here I think..) the average programmer switches careers after 10 years. If you make it past that, you're probably in it for the long haul...

One thing I think a lot of us forget is how _lucky_ we are to be working on interesting projects/tech. I started my career as a maintenance coder and if I had continued doing that, I never would have lasted this long...

And yes, though not as often as I used to, I still have times when I have to scratch a particular itch for 24 hours straight or I have to jump out of the shower to test something that just popped in my head :-)


“Programming” is not a career, it’s a function of the job. It’s like saying all a woodworker does is cut wood.

No, a woodworker builds furniture, a developer develops applications.

It helps to think of pure programmers as assemblymen working on a factory floor, nothing more nothing less.


To clarify, by "programming" I meant the act of working in a mostly technical role and writing code / architecting systems. The contrary would be working as a PM, TPM or other product / strategy focused position.


I still do but I work in php.

Depends on your stack. Things just kept getting better for me. I was using it before and after the peak and it is still much larger than it initially was.

But hate


I just wish I knew as much now as I thought I did 35 years ago.


I was very lucky that my first job out of school was at a Perl shop with older, senior engineers. It felt like I hit the jackpot in mentorship. I learned more with my two years with them than the next 5 where I cycled through all the trendy frameworks and devops tools. I’m hoping to find another small, experienced shop like that and where I can give my next 10 years; willing to relocate and take a pay cut.


How do you find those though?


As a new developer, I'd frequent a lot of meetups to build my network. I always made a point to talk the oldest guys in the room as the conversation was always the most interesting. I happen to be a sucker for dot-com bubble war stories.


From the creator of TDD on why TDD doesn’t work in many highly innovative environments https://softwareengineeringdaily.com/2019/08/28/facebook-eng...


Point 2. "Code is a Liability" ends with the sentence: "Always strive to do everything using as little code as humanly possible."

On its own I consider this bad advice.

I would change it to 'Always strive to make your code as easy to understand as possible; fewer lines of code and less complexity are preferred'


“From around the age of six, I had the habit of sketching from life. I became an artist, and from fifty on began producing works that won some reputation, but nothing I did before the age of seventy was worthy of attention. At seventy-three, I began to grasp the structures of birds and beasts, insects and fish, and of the way plants grow. If I go on trying, I will surely understand them still better by the time I am eighty-six, so that by ninety I will have penetrated to their essential nature. At one hundred, I may well have a positively divine understanding of them, while at one hundred and thirty, forty, or more I will have reached the stage where every dot and every stroke I paint will be alive. May Heaven, that grants long life, give me the chance to prove that this is no lie.”

— Hokusai


The “no copy pasta” requirement leads to left-pad incidents.


I've worked in multiple systems where we've needed country-specific DoX() and you always end up completely separating them. You start trying to fit one core logic for all countries but it always gets reduced into nothing and you wish you had just copypasted the first country logic to the next to begin with.

Once everything is separated you feel much more comfortable introducing that specific Mexico change without messing around with the core logic. Of course things that are truly agnostic shouldn't be duplicated but that tends to be extremely narrow in the end and you never know what it is before you have a lot of countries. It quickly becomes a maintenance nightmare trying to fit broad logic to impossible flexibility.
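The separation described above can be sketched roughly like this; the country codes, function names, and tax rates are all illustrative, not taken from any real system:

```python
# Hypothetical sketch: per-country logic kept fully separate, dispatched
# through a table, instead of one over-parameterized "core" function.

def calculate_tax_mx(order_total):
    """Mexico-specific rules live here; change freely without touching others."""
    return order_total * 0.16  # illustrative rate

def calculate_tax_us(order_total):
    """US-specific rules; intentionally duplicated structure, not shared."""
    return order_total * 0.07  # illustrative rate

# Adding a country means adding one function and one table entry,
# never editing existing countries' logic.
TAX_HANDLERS = {"MX": calculate_tax_mx, "US": calculate_tax_us}

def calculate_tax(country, order_total):
    return TAX_HANDLERS[country](order_total)
```

The dispatch table keeps the "truly agnostic" part (routing) as narrow as the comment suggests it ends up being in practice.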


Things i’ve learned after 20 years :

- understand and care about context. Understand the input and output of what you’re developing. Why people need it to function a certain way. And how people are actually going to use it.

- business trumps tech (most of the time). Not just in terms of how well a company will perform financially, but also in everyday developer life. For example: a different business model may imply a different tech stack choice (and that's just one example). Understand and care about the business side of the place you're working in. It'll also help you greatly when communicating with your managers.


What relates to me is when he was gonna write how awful TDD is, then he actually tried it - and liked it! There are so many things we think are utterly stupid, but have never even tried.


Is there a single example of a notable great programmer who uses TDD?


Martin Fowler is an example.


What makes you think so? Have you seen any great code he has produced or are you aware of him having implemented something decidedly non-trivial? I might be wrong, but my impression is that he's someone who spends almost all his time telling people how to write software, not writing it.


First one was an immediate turn-off. Duplicating code once is fine. With the given example, there's no need to start building an abstract billing method and a data structure to encapsulate the differences in billing between different countries, because there are only two countries and one is new. Wait for a pattern to emerge before refactoring, especially when such refactoring will make the code more complex and harder to reason about.
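A minimal sketch of the "duplicate once, wait for a pattern" stance above; the country names, rates, and return shape are made up for illustration:

```python
# Two nearly identical billing functions, deliberately kept separate.
# No abstract billing method, no encapsulating data structure yet.

def bill_customer_us(amount_cents):
    # US: tax handled elsewhere at checkout; invoice in USD.
    return {"currency": "USD", "total": amount_cents}

def bill_customer_ca(amount_cents):
    # Canada: illustrative 5% GST included up front; invoice in CAD.
    gst = round(amount_cents * 0.05)
    return {"currency": "CAD", "total": amount_cents + gst}

# Rule of three: only when a third country arrives and the shared shape
# is obvious is it worth paying the complexity cost of an abstraction.
```

Until that third case exists, the duplication is cheaper to read and safer to change than a speculative generalization.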


"Do you know what correlates more than anything else with undesirable codebase properties? The size of the codebase."

Well yeah, more code means more bad code. I'm interested to know whether more code means proportionally more bad code. If not, his statement doesn't mean much.


What about copy/pasting interfaces between packages to avoid cross package dependencies



1. Interview skills >>>>>> Job skills, productivity, and ability to get work done.

2. Learn office politics and learn to do it well.

3. Everything else is irrelevant.


10 years of coding in trade for 10 years of middle management... why would I follow your coding advice? A serious question.


There's some silly Moore's law variant buried here: programmer demand will double every couple of years.


> Learn yourself some TDD. You won’t regret it.

How?



J.B. Rainsberger's course is what eventually did it for me, after reading about TDD a lot but never managing to put it into practice: https://tdd.training.


Thank you everyone for contributing resources! I am slowly learning testing and I found it weird how there aren't many good resources.

I'm favoriting this comment so I can get back to this later.


Read the book. Worked for me.


And after you read the book do a simple toy project to try it out.
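A toy project for trying out the red-green-refactor loop might look like this; FizzBuzz is the assumed exercise here, and only the standard library's unittest is used:

```python
import unittest

def fizzbuzz(n):
    # In TDD this function is written *after* the tests below,
    # with just enough logic to make each failing test pass.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Each test starts out failing ("red"); the implementation
    # above is then grown until it passes ("green").
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_everything_else(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```

The point of the toy scale is that the refactor step stays cheap, so the habit of writing the test first has room to form.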


Going on about 17 years here.

It's not looking great for the future in terms of staying a developer. It's getting harder to find reasonable jobs. My current one and the last one are/were disappointments.

I think the best words of advice are: Devise a career backup plan. I don't currently have one even though I have exerted effort in that direction. Probably end up a stocker at Home Depot or something. Hope not.

At some point, knowing another language or framework may be interesting to you, but it is pretty much immaterial unless you are going to use it on the job. For instance, I love Kotlin, but there are hardly any Kotlin developer roles in my market. Waste of time. Same thing happened with Clojure. I never learn.

You will be too expensive: I recently went after a Kafka Architect role which I should have been a slam dunk for but when it got to price the other side suddenly got all wobbly, excuses appeared, and poof, it disappeared.

Companies want cheap, throw away talent. I will say it again: Companies want cheap, throw away talent.


Seven different organizations have been paying me (in part) to write code since 1993.

> Companies want cheap, throw away talent.

I've read and heard about this, and I think it's probably true in many or most cases, but I haven't had this problem in my career.

I'm better compensated now than I ever have been, and when I want to move, it's easier than ever to get a good position.

A little bit of context: most of my professional career was in the south and midwestern US, but the last decade has been in the San Francisco bay area. Also, while I've always been a software engineer, I've always worked more on the 'operational' side, often called 'devops', 'SRE' or 'production engineer' these days.

I wish you the best going forward.


Would you comment on what made your current and last jobs disappointments?

I don't even know what the baseline is for satisfying programming jobs. All my jobs have been similar in terms of points of frustration.[0] I've just kind of accepted it as normal.

[0] I would elaborate but don't have time right now. At work.


> I don't even know what the baseline is for satisfying programming jobs.

For myself, it's always been about the team I was a part of. When you are part of a team that takes pride in their work, and they perform well together, supporting each other, learning from each other, etc - no "prima donnas", and they all have fun doing it - it leads to a good day of work.

I'm lucky in that for most places I have worked - from the first shop that took me in, to my current employer - this has been the case. I can only think of a couple of places I worked where that wasn't entirely the case; both came with excess politics, and other stress that I just didn't (and still don't) understand. I stuck it out at both, but I'm glad I'm no longer a part of either.

One thing I do know and understand - this comes from someone who started their software engineering career at 18, and is now pushing 50 - every place where I've had this "good team quality" has been a small employer, usually with fewer than 50 employees. At my current position, there are fewer than 20 employees.

No - I don't make, have never made, and likely never will make "FAANG" money - but I'm not stressed either, and I make enough to keep a roof over my family's head, the lights on, and food in the pantry. I'm content, and if an opportunity comes up for a FAANG-money type position, I will have to look long and hard at it before making that leap, because I feel it might come with costs that just don't make sense to me any longer. I'd rather have a more "basic" salary and little stress than a lot of money and stress through the roof for a myriad of reasons in such a workplace. That isn't to say that's inevitable, but I've found that the larger the employer, the less my satisfaction with going in each day and with the job overall.


Can you be certain FAANG jobs have more stress? Some roles inarguably would, but I hear of more people doing the standard 9-5 at FAANGs than anything else. They do quality work, they do it in a reasonable amount of time, and they spend the rest of their non-working hours on other things in life.


I'm just entering the field (19 years old), and I don't have any particular experience in it yet. From the people I've talked to, jobs like the ones you describe are the best options. I'm particularly interested in government jobs for these reasons.


At 19 work everywhere. Startups can offer so much value at your stage with freedom if you choose right.


I wonder if there's space for some boutique shops that specialize in sobriety.



