These are probably the most important points here, and for pretty much any other creative process you can think of. So many good things get lost because either nobody had a backup, the only backup didn't work, or the only backup dated from after whatever disaster nuked the system in the first place.
Source control fixes this problem, but only if you remember to use it properly in the first place.
Treat it like the save system in a video game: save and commit before any risky change, and again after you finish anything important.
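In git terms that habit is just a checkpoint commit on either side of the work; a minimal sketch (file names and commit messages are made up):

    # Checkpoint before the risky change
    git add -A
    git commit -m "checkpoint before reworking the save loader"

    # ...do the risky work, then commit again once it builds and runs
    git add -A
    git commit -m "rework save loader to stream chunks"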
Depends on the project. New features and bigger changes do get separate branches that are merged in once they're thoroughly tested, especially on website projects.
But it definitely depends. A website usually benefits from feature/bugfix branches more than a game or mod does, since the latter is usually so atomised that changes to one part have basically zero effect on the others.
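For the website case, the flow is roughly this (branch names are hypothetical, and I'm assuming your default branch is main):

    # Cut a branch for the change, work on it, merge only after it's tested
    git switch -c feature/checkout-redesign
    # ...commit work here, run it through review/QA...
    git switch main
    git merge --no-ff feature/checkout-redesign
    git branch -d feature/checkout-redesign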
Yeah, right. Failing is what gets you fired. Basically it's a collection of safe, saccharine little pseudo-maxims meant to make everyone feel better.
I prefer the poster with the cat hanging on to the branch going “hang in there”.
I have been at three different companies in my career, and while I know that's not many, I can only job-hop so much. At this point I've simply given up on believing that a company with good culture can possibly exist. And if one does, it's only like 5 people tops, and it'll break down if/when it tries to scale.
Every leader views his workers as replaceable feckless layabouts, front line managers are the designated whip-crackers with far too little actual power, middle managers are stressed out fear-driven creatures with both little real authority AND no whip to crack over the workers, and leaf-node workers are depressed tired cogs in a system that cares tremendously little about them and cares increasingly less about even pretending otherwise, especially post-layoffs.
And yet the software still needs to get written. I cope by just doing what I can, not trying to do anything the system doesn't support me in doing, and making as many friends as I can along the way. And putting my family first every single time.
Name-calling because someone has a different experience than you... You know that can't possibly help any discussion, even if for a moment you feel good about being a bully.
That's a good description of modern software development. For any junior devs reading, this is an accurate presentation of the reality of software development for most devs. There are exceptions, but they are either lucky and know it, lucky but with a superiority complex who think they have a monopoly on the truth, NPCs, or the usual gaggle of people who disagree with everything I post here anyway.
Think about it: how can JavaScript have so many innovations, yet practically no one has moved on from the scrum/kanban abomination in going on a decade, perhaps more? Why is that?
If you focus on the shit, life will be shit. Focus on what's in your control. Maybe not a lot is, but the little things are: what time you go to bed, what you eat, how you spend the little free time you may have. Take control of that first, then build off of it. Step by baby step, you'll crawl out of the abyss, and remember that you aren't doomed to eat shit for the rest of your life.
Here it is, shorter: take control of the things you can control, and accept the things you cannot.
It's good advice. If you spin your wheels on things you have no control over, it's a waste of your time and energy. Focus on the things you enjoy, and the things you can change.
Because I also know people who've failed and got fired. Quite a lot of them, in fact. As in "they failed and subsequently got fired".
The fact that there's some sample of "failed and fired" AND "failed and not fired" means that even if the split were 80/20, one still has to conclude there's more than just failure at play. The evidence is inconclusive for saying it's only failure involved.
My experience hasn't been 80/20, it's been more like 10/90. My whole company has people failing and floundering all over the place, and I've only seen two people fired:
one who blamed everyone but himself, and another who never asked for help and wanted to save face at every opportunity.
Failures are a part of life. If you are at a company that doesn't acknowledge that, I suggest finding someplace else for your own good.
Places that won't tolerate mistakes are arguably places you don't want to work anyway.
Huge management red flag, for a bunch of reasons. It means people will start to tell their bosses what they want to hear. Instead of products, you'll build a fractal of Potemkin villages.
> Places that won't tolerate mistakes are arguably places you don't want to work anyway.
That's hardly helpful advice. You don't have recruiters telling job applicants that they don't tolerate mistakes. By definition, employees get no warning before being fired for making a mistake, which is hardly a situation that advice like "you don't want to work there anyway" helps with. When that happens, all there is to do is gather your stuff, rethink your life, update your resume, and prepare to explain in the next job interview how your last job ended so abruptly. None of those steps is in your best interest.
It's on you to figure out that sort of stuff during the interview process.
The interview is not just where they figure out whether they want to hire you, but where you decide whether or not to accept their offer.
You are allowed to interview the interviewers. In fact, you'd be a fool not to, since this sends a positive signal that you are interested but not desperate. Makes you seem like a quality hire.
It's a pain, but it's the reality that we're stuck with. If you go out of your way to hide failure or burn yourself out trying to never fail, how is that better? What's a realistic alternative to accepting that you're human and might make mistakes, and accepting that some places are simply too dysfunctional to be able to handle it?
That's really not the point. The whole point is that it's not possible to know with any degree of certainty whether a given position will be more or less tolerant of mistakes. That tolerance can be set by anyone in your organization, from the CEO down to your own team members, and it's certainly not advertised by recruiters. It's something you can only tell once you're already onboard and head-first into the job. And then what? Are you going to resign and job-hop to yet another unknown? That itself burns through a lot of goodwill, as you're trading your escape from potential burnout for a limited chance of avoiding the same issue, attached to a resume that labels you as unreliable and finicky.
It matters little if anyone claims that making mistakes is human nature. What really matters is whether your organization employs people who weaponize mistakes. Another fact of human nature is personal ambition, and there are far too many people who don't mind throwing others under the bus to use them as stepping stones in their career paths. Some of those types succeed; others keep trying until they do. What's your answer for that?
Not in my experience. For one thing, this is usually what the peer review process, tests and QA team are there to fix. Almost everyone's work will have at least some things that need improvement/fixing before they can go live, but there's a process to make sure they don't go through into production without those fixes.
If the issue actually happens in production and, say, a bank's payment API goes down, that may be a different story, but even then I've usually seen the managers call it a process issue rather than find someone responsible and call for their firing. Or, as the old story goes, treat it like an expensive learning experience.
>For one thing, this is usually what the peer review process, tests and QA team are there to fix.
The QA team is not your personal team of muckrakers. The more you treat us as such, the faster we burn out, because it's our job to say no, and saying no is a recipe for interpersonal disaster or attrition in an org, especially once management inevitably starts overriding the department consistently, regardless of the amount of risk they take on by doing so.
Where are the big things easily forgotten? Oh yeah, in common libraries... auth system, email system, migration system... all the things I used to build from scratch...
We build these things from scratch because of all the lessons we learned (mostly about not building them from scratch) the last time we built them from scratch. :)
Definitely true, most of what I've learned building stuff from scratch has been about not building stuff from scratch.
For my own projects I generally think YA(aren't)GNI about that original feature, but YA(re)GN(or at least want)I that framework instead of vanilla, or that declarative config tool, or those type annotations, etc.
This one time I got told off for reading Slashdot at work. The next day our tech lead couldn't figure out why the log files didn't contain anything recent after our control program crashed (which tripped a watchdog timer and hard-reset the controller). I got to explain how I learned from a Slashdot post that ext4 caches writes in RAM and can lose data if power is removed unexpectedly. That was a good day.
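The underlying issue: a plain write only lands in the page cache, and until something forces a flush, a power cut can wipe it. A minimal sketch of the mitigation (the log path is hypothetical, and passing a FILE argument to sync is GNU coreutils-specific):

    # Append a critical line, then force that file's data out of the page cache
    echo "watchdog armed" >> /var/log/control.log
    # -d/--data: flush just this file's data to stable storage
    sync -d /var/log/control.log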
Oh man, visual tabs are a productivity killer. Every time you want to change buffers you have to grab the mouse, find it, and click on a little tab? Atrocious.
Bind two keys: one for "open previous buffer" and another for "open list of all buffers". And disable tabs completely; they don't add any useful information, only a chore of cleaning them up...
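In Vim, for example (assuming Vim; most editors have an equivalent), that comes down to something like:

    " ~/.vimrc sketch: buffer navigation without tabs
    " Allow switching away from modified buffers without saving first
    set hidden
    " Toggle back to the previous (alternate) buffer
    nnoremap <leader><leader> <C-^>
    " List all buffers, then prompt for which one to open
    nnoremap <leader>b :ls<CR>:buffer<Space>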
This one resonates with me. And a follow-up thought: even if I am stupid, it'll just take longer. The idea being that intelligence doesn't limit what you can do, only how fast you can do it.
I guess all of that could fit inside a prompt to an LLM and still leave room for other input. I wonder if it would improve coding agents such as GPT Engineer?
Tell me you write throwaway CRUD apps without telling me you write throwaway CRUD apps. If your code touches the real world, human lives are often quite literally in your hands. This idea that you can just chill out, yeet some code into prod, and head to happy hour will get people killed or seriously injured.
I mean, percentage-wise it's likely the majority of jobs in software engineering are in fields where people won't get killed if things go wrong. Very few people die because a website goes down, a video game or desktop program has bugs, or a business loses a bit of money or has some downtime.
Yes, there are definitely fields where people could be killed if things go wrong: software for cars, planes, spacecraft, or medical devices, say, or for dangerous industries like logging, mining, or the military.
But I suspect the ratio of people working on software that's mostly/entirely harmless compared to software that has lives on the line is probably like 80:20.