Absolute truths I unlearned as junior developer (2019) (monicalent.com)
461 points by ddtaylor on June 6, 2022 | 258 comments



The truth I had to unlearn is that coding is a solitary activity, and that coding is the most important part of a senior software engineer's job. Now I rarely get the chance to code for a few hours straight, because by the time I have enough information to write the code, all that's left is the easy part. The hard part is coordinating, defining the problem, planning for the future, and communicating the current status of the problem. Not to mention onboarding newer devs who haven't learned the technologies yet.


I hate that this has become the norm. Sometimes, even most of the time, tight, clean code, proper data structures, and smart, clean queries are vital for both cost and performance reasons.

That smart (experienced) people think up how to approach a problem and throw it over the wall to the juniors makes me sad, even if it has become somewhat commonplace.

I was actually offered a position doing exactly that for a rather successful, non-FAANG valley company. Literally 'decide how to solve the problem, then hand it off and never think about it again.' It sounded awful for everyone involved. I'd just write in abstracts and never code, and the coder just codes without thinking a ton. Sounds terrible for both parties' growth.


Thank you. Agreed completely

I see software as a form of literacy, and it both amuses and saddens me to hear things like "I used to write code, but since I moved to management I have stopped"

We don't hear phrases like "I used to read and write English, but since I moved to management I have stopped"

It seems sad to note that this "move to management" is now also becoming the "move to senior engineering". This indicates to me some problem with job title inflation - like there are not enough manager slots so the management layer is moving down a rung to senior engineer.

Edit: of course Linus Torvalds wrote more emails than code. Which tends to indicate something.


Consider the manager who tries to put in a couple hours a week helping with filing (as in, organizing paper files). Let's say the filing system is very busy and constantly evolving to meet new needs.

Odds are they're just going to mess things up and annoy the people who do it as a major component of their job. Now, it may still be worth it, to keep the manager somewhat aware of how workers are doing their jobs, but it's not going to be helpful.

You need literacy to work on filing, as you need programming ability to do software development, but in either case it's not sufficient. If you're just dipping in here and there, you'll likely be screwing things up more than you're helping. There's too much context required.

Reading, writing, and programming may be forms of literacy, but if you try to jump in with tiny and inconsistent efforts to help someone write a novel—and I mean the actual writing, not helping them with organization or sales or something—you're probably doing more harm than good. Collaboration is possible even on novel-writing, of course, but very part-time efforts aren't likely to improve anything.

Programming is the same, and that's why once someone gets past about half-coding-half-managing it gets harder to keep helping with the coding, productively. It's not about literacy—the person trying to help with an hour or two of novel-writing a week is literate, after all—it's about context.


Exactly.

I've been a manager for a while and don't do much day to day hands on keyboard coding. I'm still able to help, sometimes quite a bit, but it depends on what the problem is.

If someone on the team is struggling with the gnarly domain specific application logic portions, I will usually direct them to a more senior teammate. I'm more likely to add confusion than reduce it. I'm not in those weeds enough to know how things are changing on a day to day basis. I understand the broad strokes, the roadmap, and where the senior devs are taking things. But I might not be able to tell a dev what the gotchas of their idea really are.

Where I find myself most effective is when I can leverage my experience as a developer in general. I can help them with pros and cons of technology X vs Y, pattern A vs B. I can point out things they might not have been aware of in terms of peculiarities of the tech stack and how it might change things - for instance bugs that are only clear if one understands the details of that standard library call. How to diagnose and debug a problem in an effective manner. How to research a problem.
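
A concrete example of the kind of standard-library detail I mean (hypothetical, in Python - not from our actual stack):

    import re

    # re.match() anchors at the start of the string; re.search() scans the
    # whole string. Code that "works" in a quick test can silently fail on
    # real input if the author doesn't know this detail of the call.
    print(re.match(r"\d+", "order 42"))   # None - no digits at the start
    print(re.search(r"\d+", "order 42"))  # matches "42" at position 6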

tl;dr I can and do still talk code and help out my team. But the further a problem gets from my general experience, and the more it depends on being embroiled in their day-to-day back and forth, the less effective I am at it.


Not exactly - there are a few things in play

1. You cannot alter a company with code in the way you can with written words. If we split company-as-a-machine from company-as-people-operating-(in)-the-machine and company-as-people-adjusting-the-machine, then the difference is more stark.

A company-as-machine does run on written (and often unwritten) policies that humans adhere to. More and more, there is code doing the actual machine's work, but it is rare, if ever, that it is the whole piece - maybe there are whole companies that are totally automated, but I doubt it.

As such, a big, important part of being a manager - changing the machine to be more efficient - is just not (yet) possible with code. But when it is, managers will talk code and write code.

2. I am not sure where to go with the filing part. Yes, in many companies there are so many jobs to get done that anyone getting their hands in and automating anything is a huge help. That person would be making a part of the company-as-machine. Good. Presumably they would do it ten hours a week to a professional standard.

Is it the best way to organise things? No - again, Torvalds had a hierarchical arrangement and hardly ever wrote code - but that belies the reams of code he wrote as examples, discussion documents, strawmen, and just test code.

Managers should be writing test frameworks, or tools to build the documentation in Japanese, or whatever.

3. The whole "pointing people at someone else who is the expert" screams bottleneck to me. Screams that people are in silos, that there are not enough brown-bag lunches or other ways to discover / get repeatedly told about how other areas of the codebase work.

4. Things like weird bugs and effects of the tech stack are great - exactly the sort of thing to put in a code review. Or a code review of reviews (which I think is another good managerial practise).


“Code is a liability” “Don’t solve problems with code that can be solved in other ways” “Code is grunt work, I’m an architect” “Projects fail from people problems, not technical problems”

There’s truth to all of these, and yet the people who repeat them dogmatically are often programmers or managers with ultra inflated titles who write blog posts or emails all day ;)

They probably didn’t even like programming and saw it as a fast track to management.

I’ll take the real programmers who get stuff done, thanks.


If you get developers who just get stuff done - well, salespeople have so many stupid ideas that implementing them all would hurt my brain.

Imagine implementing everything that pushy sales people throw at developers.

I agree only that "code is grunt work" is a wrong simplification, but picking out which code is really valuable to write is still, in my opinion, much more important than just slinging out code.

My idea is that there is no "work smart or work hard" - first you have to make sure the code to be written is valuable, and then you work hard to code it.


> “Code is a liability” “Don’t solve problems with code that can be solved in other ways”

In my experience managers tend to parrot those, but it is almost always coders (technical managers who get their hands dirty still count) who actually know when a problem can be solved without code in an elegant, scalable way.

Again IME, when managers are left to their own devices for those decisions, it's often "send an email telling users they can just [mis]use feature X for that", or "yeah, just install Intercom and support will handle it", or "ask users to open a support ticket to perform this feature". Solutions which often create more problems for support people, customers/users, non-technical users, or even developers later, because they often have to do something with the non-structured chat data.


My impression as well. My thoughts then are usually something like the following questions:

"Oh? Have you really sought out all the good resources and learned from them? Multiple different paradigms? Countless projects exploring ideas? Many different languages, learning their concepts? How come you stopped liking to code? What made you lose the joy of making the computer do your bidding? Did you ever really like it? If you did not, did you really go as far as you were able to, in order to explore all the things? Do you really know as much as you claim to know? Or have you been 10 years in the same <you all know what lang here> mainstream OOP job and only feel like your time spent doing the same thing over and over again warrants you a senior title, and that you should move on to management, because 'there is nothing more to explore'?"

I do not usually ask these questions. I rather observe and might indirectly poke for some knowledge. When I do ask some of those questions, I usually get a reply like "Meh, programming language does not matter, it is all the same." -- the usual "I don't want to have to learn more." type of response. There are many variations of this response, for paradigms, concepts, programming languages, you name it. Usually there is some overly broad generalization in it, overlooking benefits that one approach might have over another, because they never tried or learned that approach and have no experience with it. When I hear that kind of response, I know what I am dealing with.

The person is free to show by their words and actions that they actually _do_ have that knowledge. Otherwise I will just accept that this person does not love coding the way I do, and that they do not have a drive to go all the way in exploring so many different concepts and things. That's totally fine; coding might not be for them, or they might not like it to the same degree (and they do not have to), or they might not have been as lucky as I was and did not get continuously in touch with new exciting things to learn about. Maybe they did get stuck in that <you all know what lang here> OOP drudge and really did not see anything new any longer.

Whatever it is, I just hope people don't simply assume that, just because "they have been coding in the past for x years", their experience is the same as mine. I do not mean in experience quantity (years) - there surely are many people longer in this hustle than me - but in the individual experiences and concepts one gets in touch with when exploring off the main road. There is so, so much to explore and learn about. One can probably learn one's whole life and not have seen it all.


I'd argue more people severely overestimate the needed quantity of communication, and management roles/responsibilities have inflated everything as a result. Worse, people who want to primarily talk are going to advocate for others to talk more.

Regardless, whoever brings up what the GP does is frequently bombarded with the old "communication is important" or "soft skills matter" spiel, in an attempt to validate the explosion of communication requirements.


Many big modern organisations are built by bureaucrats whose specialisation it is to create more bureaucracy. Thus they spend a lot of time thinking up ways to measure the work performed from various angles, create reports for their superiors, and find other ways to ride on the work of others to earn their salary instead of doing actual work. This propagates downward and soon everyone (but perhaps the most junior contributors) is working in politics about meta-work, rather than doing the actual work.

It takes rebel managers to go "no, I won't participate in this seventh time allocation study" and instead do actual work to break the cycle.


Precisely this. I've seen every size and type of organisation from single digit employee count up to tens of thousands in huge bureaucracies.

Inevitably, the paperwork becomes the goal, and no amount of pleading with middle-managers will convince them that their job is unnecessary. If they were to believe us, they'd have to quit and become unemployed.

Even in my relatively small consulting org I have to actively fight against the rising tide of overhead.

Just recently I tried to explain to a manager that status reports are not the most important deliverable. That the actual product is, and reporting about it is secondary.

They were aghast.


Even more shocking: if the manager was active in implementation -- even just a small part -- they would only need a fraction as many status reports. They could just talk to the relevant people instead.


I strongly believe that you can dramatically improve the productivity of most companies by firing 50% of middle management and using the money to hire more people who actually work on the products/services the company is selling.


Really need a "clean communication" movement just like we have a "clean code" movement.


I remember reading a thought experiment: after a fire drill when everyone is in the parking lot, the CEO can say,

"Everyone who was in an actual call with a customer when the drill started can get back in. The rest of you stay out until someone dealing with a customer asks for your help."

I imagine a lot of people in HR, marketing, and various other paper-shuffling positions have to stay in the parking lot for quite some time. Not to mention many executives!


> I imagine a lot of people in HR, marketing, and various other paper-shuffling positions have to stay in the parking lot for quite some time. Not to mention many executives!

Most of the IT people also...


In some organisations where many customer-facing employees are fairly IT literate, yup, definitely.


What matters is not communication, but shared understanding. This is of incredible importance to get written code to actually matter.

One gets the shared understanding through communication. If that is going poorly, more communication is required. So whilst we agree an abundance of communication is a bad thing, we disagree on why.

It's easy to hate on time wasted on communication. Disruption to your process sucks. But it's not wasted time if you don't have shared understanding yet. It's crucial in that case.


>If that is going poorly, more communication is required.

No, you need better communication. Start decoupling "more" from "better". That's the crux of the matter, people trying to bruteforce communication issues by adding more communication.

Most people suck at communication. Including 99% of the people applauding themselves or each other for "great communication". We're not solving these issues by layering on more communication; that's how orgs die under their own weight. What they need is better strategies, and to stop bogging down ICs with redundant bureaucracy, as the sibling points out.


Let me rephrase:

If your communication is bad, then you will need more communication to get to shared understanding.

Yes, better communication would also be a great solution. But "if you are bad at X, just do X better" is meaningless advice. Useful advice is either how to fix the problem (advice on how to do X better) or less popularly but often very useful, how to mitigate the problem (advice on how to avoid bad outcomes from being bad at X).

Turns out that more communication is essentially the only way to mitigate ineffective communication.


>Turns out that more communication is essentially the only way to mitigate ineffective communication.

No, it's not. This is the equivalent of saying "if your writing sucks, just use more words". Anyone reading an overly terse, jumbled mess can tell you that's not the silver bullet. Somehow, that is the solution we go for whenever management is involved, until they figure out "oh crap, maybe bogging down our ICs with bureaucracy is in fact a bad idea". Layering on more communication is not a net positive by default, not even in a world where time isn't a factor.

Heck, we both know most people have neither the mental capacity nor the note-taking diligence to keep up with all of this. The system is already showing signs of oversaturation with how many people go "what did we discuss again last meeting?"

>But "if you are bad at X, just do X better" is meaningless advice

Several others have already given examples. If you don't have information, stop holding 30m-2h meetings when everyone already knows the answer is "we have to do more research", followed by actions such as "make a few quick prototypes and test with the client" or "have the front-facing people ask more questions". Use audio, visual, and video media to communicate what words convey poorly. Use fewer words and fewer meetings overall, where possible.

Start thinking about what truly adds value and what is a horrible proxy with near-zero evidence behind it. The default attitude is "never delete until evidence proves otherwise". Anyone looking at Brooks's Law can understand how this goes wrong.

I shouldn't even have to explain this. I'm a consumer of the culture, not a producer. The fact that ICs have to tell management types how to do their job better is ridiculous: that's management's job to begin with, optimizing communication processes. I don't expect clients to tell me more than "the product is slow" either, even if I'll take any bit of information they can provide me.


I agree quantity isn't the solution if communication quality is poor.

Communication is more than just talking at people. It's about having healthy boundaries, positive conflict-resolution styles, and being able to reach a decision quickly and explain it clearly and concisely to others. It's about anticipating blockers and clearing them, streamlining process to keep things moving, being accountable, and projecting security in yourself and your actions even if you might doubt them. But most of all it's about listening and letting the team do their jobs.


> If that is going poorly, more communication is required

That is like saying "if your program is performing poorly, more code is required".


I meant something more along the line of "if your code is ill-optimized, you gotta throw more CPU time at it".


Well, do you think that works?

When you do that, what's going to happen is the developers are going to think they can get away with making it even more inefficient. Not intentionally, of course, but the signal you're sending them is "it's fine if it's slow, we'll throw more compute at it" and that's the condition they will be solving around.

So the next time they choose between flashy feature and fixing a performance problem, they will create the flashy feature. "We can let the hardware acquisition people deal with performance."

It might even look like it works, at first. You get features out quickly and the thing runs almost acceptably.

But then months or years later, you're spending a shitton of compute on performing something that could be done with a couple of pizza boxes. And everyone is saying "well it's too late to change now, everything is designed around a HPC assumption."

And now you're stuck bleeding money on things you shouldn't have needed.

If I sound sore it's because I have lived through it too many times.

You should never, ever solve the immediate problem without looking at what long-term reinforcing feedback loops you're setting up in the process.


> We don't hear phrases like "I used to read and write English, but since I moved to management I have stopped"

Maybe we do. Wouldn't a better analogy be professional writers moving toward an editor/director role?


You're right! An editor probably reads more than all their authors combined, and provides feedback on what isn't up to standard. My engineering managers were expected to do the same. An engineering manager who does not read all the code and PRs is as helpless as an editor who cannot read. They cannot know who is performing well, who needs coaching, who is all talk but makes a mess, etc. The idea that engineering managers can be excellent but code illiterate keeps showing up, but makes no sense at all.


The sentiment I've seen is that a great people manager who trusts their engineers despite not knowing much about software can still be an above average manager.

Given that the median manager seems to be thoroughly mediocre, an above-average manager, even if they can't code or read code, seems desirable.


I remember reading about an actual observational study on this -- I think on HBR -- and the conclusion was basically that managers that are technically skilled have much happier employees on average.

Another finding from the same study was that the statements "I feel like my manager can do my job" and "I am not looking for new work" were strongly correlated. (This sounds like the same thing but isn't.)

In other words, you have to be one hell of a people person to make up for the cards stacked against you.


I think either end of the spectrum is fine: highly skilled / can do your job if needed, or not technically skilled at all / honest about it / tracks schedule / delegates all technical authority. The messy middle is where the problems happen.


You can't be a coach and a player. It doesn't work. Having technical skills and being able to make or weigh in on big-picture decisions is great. But you have to trust your team to do their job, even if it isn't exactly how you would do it. Does it meet the business need? Does it make sense strategically? Is your team staying unblocked? Are you being reactive or proactive? Are your team members growing and improving?


I disagree. My best bosses have been player/coaches. You usually only find this at smaller companies.


I must live in an alternate universe. At a previous company, the last two "engineering managers" I worked with definitely could NOT read code. They did not read PRs, other than perhaps the title and the Jira it linked to. Any technical management, including PR and design review, was delegated to the staff or principal engineer(s) on the team.


> We don't hear phrases like "I used to read and write English, but since I moved to management I have stopped"

You surely would hear things like “I used to write research papers but since I moved to management I have stopped” or “I used to write fiction stories but since I moved to management I have stopped” or “I used to write legal briefs but since I moved to management I have stopped” etc. if you talked to people in fields where people’s focused work is writing.


"I used to write fiction stories.." yes but they have not stopped reading and writing english on the job. In fact I bet that reading and writing english is still the job.

Things I think software management should be doing

(besides fighting politics that again should not exist but does because corruption)

- review of reviews (how is the code review process going, i'm on problems, common wins)

- code analysis - hot spots, changing idioms,

- keeping up with rest if company - what code issues / infrastructure / operational / metrics are coming, how do we adjust


I see it more as leverage, the senior who understands good coding style and performance implications is far more useful when they can drive decisions across the whole codebase, instead of narrowly on one small feature. I’ll still take small features occasionally, but it’s more of a way to show juniors best practices so that they can improve the code they write, which is a win win for everyone.


> We don't hear phrases like "I used to read and write English, but since I moved to management I have stopped"

I had to chuckle a bit at this one given how some VP+ folks write emails.


I don't understand this; this is the way every hierarchy works, and how every single job/management role functions. It's to solve the problem of communication overhead growing exponentially with larger groups of people. Why does this make you sad? And is it better to have a dysfunctional organisation where people float in and out of poorly defined roles and everyone tries to do everything? Does that really make you more happy? I don't get this emotional take on specialisation.


It is better to have a functional organisation where people float in and out of roles defined by the expectations of others and everyone is capable of judging basic business tradeoffs, yes.

That, in my experience, makes people happier. They get to focus on important problems, help people they know, and develop their well-roundedness as human beings.

I think you might be underestimating the amount of overhead that is added by a heavy bureaucratic hierarchy.


This reminds me of a very interesting comment on the engineering and management practices at Intel [1]: "I often called Intel an 'ant hill', because the engineers would swarm a project just like ants do a meal."

I've seen it happen at smaller scales, great engineers in flat hierarchies without direction might not be the best idea from a business perspective...

[1] https://news.ycombinator.com/item?id=31571560


Who says they have to be "without direction?"

Only in a very hierarchical organisation are the lowest levels without direction -- because the higher levels maintain their position by keeping important information secret.

If the important information (market signals, experiment outcomes, financial data, etc.) is made available to everyone, and everyone receives a sliver of training in interpreting it, any group of engineers worth their salt can make responsible decisions in the right direction. (Often much better than a small set of executives would.)


I like the idea, and I've seen it work for teams focusing on a single project/product. What's unsolved for me is how to scale this.

Interpreting data takes time, figuring out a strategy that spans multiple projects and years takes time... not sure this is workable to do individually. I'm all for being transparent with goals, that would be a given for me in any kind of organization - hierarchical or not. But somebody needs to keep up with the ideas of the engineers, customer requests, business goals and changing markets to put everything into an actionable strategy. In bigger orgs this is an ongoing process and requires full-time dedication... It would be really hard (but very interesting) to come up with a process to 'crowd-source' those things from 100+ engineers, skipping the middle-management positions.


That sounds incredibly amazing. Utopian.

But I doubt it is possible even if all humans involved are incredible. You still need coordination.


Yeah this annoys me to no end how tech people pretend that they have casually invented peace on earth, and act like it's the most obvious thing in the world. "Of course large groups of people simply just get along perfectly and efficiently without any coordination" Yeah right.

It's the people who are dysfunctional who thrive in these environments because they don't have to be accountable, so their issues just disappear, and people who actually function and take their job seriously will burn out and go insane in the chaos.


Large groups of people with a common goal can coordinate within themselves. They don't need to hear "do X, now do Y" from someone else.

And if they do, they can appoint that someone else on their own -- it's how the world's free countries operate, after all; and a country is bigger than a company.

The only reason people think this doesn't work for companies is that they haven't experienced the "common goal" part -- management bureaucracy discourages caring about the common goal, instead focusing on encouraging obeying direct orders.

(And then it goes on to redefine "obeying orders" as "coordination" to prevent anyone from seeing what's going on.)


> And if they do, they can appoint that someone else on their own -- it's how the world's free countries operate, after all; and a country is bigger than a company.

That's still a "manager" - someone to manage better coordination. The point here is that such a role is required to get work done.

Sure, there are weird things that happen when the manager stops being a bottom-up appointee and starts being a top-down ruler. Heck that is incredibly common. But that does not mean we should do away with central figures that handle coordination. You still need those central figures.


Well you have the burden of proof for these extraordinary claims, in what way is this different from the pitch of a cult?


The book "Turn the Ship Around" by David Marquet is basically along these lines. He worked to turn a poorly-performing submarine in the US Navy to one of the best. The gist is that he enabled autonomy and shared vision to reduce the top-down heavy handedness that they were usually used to, allowing for more efficient decision making.

That's a gross generalization, and it is still very hard to conceptualize, but thought provoking.


I've worked under these circumstances before. It's not at all utopian, but I was definitely much happier. And yes, it is a bit cultish, but who cares? I'm an adult and I know it's just a job—If some cultish behaviour helps people who otherwise wouldn't care to know each other work together, then I'm all for it. It just requires transparency.

And of course, that isn't for everyone. I know people who hated working like that and left, and that's totally fine. Just don't be dismissive that there are other ways.


What would you consider sufficient evidence?

Not that I can't come up with a lot, but if it's trivial to prove I have less work to do.


It's not that far off from my experience in a research organization.

We underestimate how committed others are to the success of the organization that pays for their food (even though we feel that commitment ourselves). Coordination is needed, but if you trust and empower the ICs, you can communicate a high-level vision and then just look out for major problems and opportunities, rather than micromanaging people who are doing roughly the right thing.

It's a model that doesn't work everywhere but it can lead to increased creative output, happiness despite lower wages, less need for middle management, and other benefits to an organization.


You need coordination. You don't necessarily need hierarchical coordination.


Coordination without any kind of hierarchy has N(N-1)/2 complexity. That gets overwhelming way too soon. Have an idea that requires everyone else to change something? That's a cascade of conversations approaching N(N-1)/2 (or one big meeting with the same sort of complexity). Need to change your approach to match what others are doing? Gotta make a 1-on-1 connection. If they need to change their approach, they need to coordinate with others; continue for a long time.
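
To make the growth concrete, a quick back-of-the-envelope sketch (Python; team sizes picked arbitrarily):

    # Pairwise communication channels among N people: N * (N - 1) / 2
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 150):
        print(f"{n:>4} people -> {channels(n):>6} channels")
    # 5 -> 10, 10 -> 45, 50 -> 1225, 150 -> 11175

Quadratic rather than factorial, but the point stands: it blows up fast.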

If you want any kind of efficiency, you need to have small-ish teams. I'd guess about 10 people, but let's say 50. You need to chunk up work so that teams can work in parallel. You need central oversight to coordinate the teams. This can be just a group meeting of team leaders, but the big picture should not be lost. And you need to make some decisions from this central picture.

All of this very quickly leads to hierarchy.


Isn't hierarchy standing in for encapsulation here? Companies interact with one another in a coordinated way with neither a hierarchy nor knowledge of what every other company is doing.


Is a hierarchy the right way to organise a company?

Communication does not have to happen from manager down to lower manager and then manager to reports. Why can't one guy at the top just email everyone? And even that does not have to be top-down, answers received from God on the mountaintop - it can be part of an active conversation (cf. Torvalds).

One way of looking at this is that hierarchy works well for an organisation where the people are doing most of the actual work (i.e. an army fighting). It does not have to be the right solution where the actual work is code that will then do the actual work (i.e. Google's ad marketplace is run day to day by the code; when Google makes a change, they are to all intents and purposes releasing a new company that does things in new ways).

Once upon a time you had the same people in the same roles, and then you turned to them and said: you are going to (sell ads) in a different way. The distinction between an organisation and what it does was blurred. But with code there is a clear distinction.

I even go so far as to say that coders are the new managers - managers used to be needed for designing an organisation that would perform.


I’ve been thinking of coding as management too.

Consider the senior GIS-focused dev at my work: to do what he does, back in the 70s, he'd have been a VP of a division of analysts and mapping techs. In 2022: he's a programmer/maintainer on a project.

The co-op ramping up on the same program? Back in the day she would have been a senior analyst on the management track under the VP. In 2022: a junior coder we can assign tickets to, with the senior dev not as a boss/superior but as a guide.

Computer programs automate business processes that would require large orgs and rigid hierarchies 50 years ago.


>Is a hierarchy the right way to organise a company?

I think so.

I've twice worked as a software engineer for flat organizations (once as a subcontractor, once as a normal employee) and I really, really dislike it. Strong leadership is important.

There ends up being chaos, uncertainty, and a distinct lack of accountability.


It is sad, because the hierarchy is not solving that problem; instead, the problem is moving down the hierarchy.


Good analogy. Imagine Swift jotting down a few ideas... guy sees a land of giants, a land of little people, add some pirates, etc., and handing it off to some high school kid to write! (I don't know why that's the first book to come to mind.)

In my career, my most rewarding, fun, but also frustrating times are being handed problems I don't know how to solve, even moreso when it seems nobody had tried to solve it before.

Where would we be if people like Bellard just wrote general ideas, but never any code? Without ffmpeg and qemu, probably.


Moving senior engineers into management is the smart thing sadly.

In my experience with F500 companies, doing anything causes an incredible amount of grief.

The most productive thing for seniors IS stopping coding.

Going to meetings is the only way to deploy ANY code to production.

The problem isn't even software.

The problem is "new software" represents change.


Sadly it is a smart thing.

But not only to get any code to prod.

From my perspective, it is to stop loads of stupid code and stupid solutions even before they are written down as tasks in your favorite task-tracking tool.

That is what meetings are for as well.


You seem to be arguing for promoting seniors into architects. The GP is arguing for promoting them to management.

If the one hard problem your place has is organizing all the easy problems so they add to each other, turning your seniors into architects makes a lot of sense. And that's a very common situation.

But the GP's motivation for moving them into management is basically that the organization is dysfunctional. That's not a good reason, although it may be the only thing you can do.


I am mostly thinking about technical team lead role which is not an architect and not managerial role.

Team lead does not have time to code most of the time but does code reviews and attends meetings with architects and others and works on aligning stars so things happen. Then also has to stop stupid ideas or propose how to better solve a problem for a customer.

But as I read GP post again it seems he might mean actual management.


I'm unfamiliar with the term GP.

However, what you've said is exactly right.

A tech lead codes less, but they directly contribute to producing the engineering artifact.

They are an "actual contributor with direct responsibility."

However, all meetings that the tech lead has with people OUTSIDE the team are communication overhead to be minimized. So a tech lead combines code review, external communication, and direct work on the engineering artifact.

Communication overhead is the problem. Architects, people managers, are all overhead roles. They don't "actually contribute" to the product.

To define it more strictly: it's not actually about the "full-time role." Maybe the senior engineer doesn't change their job title, but their schedule becomes less about coding and more about communicating.

It's about minimizing the amount of external communication that needs to take place per code deployment.


Might I suggest it's automating the communication (tests, other tooling)?


Management and architecture roles are essentially interchangeable.

"People who go to meetings and don't directly contribute to the product." Aka communication overhead.

I don't think it's about job titles.

It's simply that, senior engineers who COULD do direct work on the product, instead devote time to communication overhead.


Not sure if it's a fair analogy.

A construction manager doesn't use hammers.

To the degree that software is solving problems, your analogy is valid.

To the degree that it's mechanical, I have a point.

I'm managing a team right now that's making a simple mobile app. 100% off-the-shelf everything. There is not a single 'algorithm'.

Flutter and Django by the book.

With Django, the whole thing is a giant 'convention' - it's like, there is no 'code'. Just 'templates'.
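
To give a flavor of what I mean by 'convention' (a minimal sketch - Order and OrderList are hypothetical names, not our actual app):

    # models.py - declare the data; Django derives the schema from it
    from django.db import models

    class Order(models.Model):
        customer = models.CharField(max_length=100)
        created = models.DateTimeField(auto_now_add=True)

    # views.py - a stock generic view; by convention it renders the
    # template <app>/order_list.html with an "order_list" context variable
    from django.views.generic import ListView

    class OrderList(ListView):
        model = Order
        paginate_by = 25

Nothing here is an 'algorithm'; it's all filling in the framework's blanks.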

I can't communicate how much confidence this gives me.

In the areas where the team has to make decisions ... that's where all the problems are.

So for most of our activities, we design features, work out the UI, give it to the designer, review, and hand it off to the devs to actually 'do'. The code is mundane.

This kind of scenario is quite common in software, perhaps more common than not.


"I used to lay bricks and now im a general contractor"


thank you


>We don't hear phrases like "I used to read and write English, but since I moved to management I have stopped"

Lots of people say they stopped reading books after high school. It's also sad.


I work in a FAANG at this level - but one of the expectations is "you lift the others up around you" rather than just handing out tasks, and the other is "you write the hardest parts of the system".

For me this can be something algo heavy with tight perf requirements, or setting up initial abstractions and architecture - it's a mix.

But I'm only coding about 30% of the time.


> I was actually offered a position doing exactly that for a rather successful, nonFAANG valley company. Literally 'decide how to solve the problem, then hand it off and never think about it again.' It sounded awful for everyone involved.

I did that for a while, and it was exactly as awful as you would think. The implementers resented the implication that they couldn't do research and make interesting decisions, and I hated spending my time describing designs instead of implementing them. The design documents I wrote were DOA, since the implementers neglected to update them when they deviated from them. And of course all the normal dynamics of software development (shifting priorities, changing product requirements) were happening at the same time that I was handing off "finished" designs. It was a farce, and I quickly left for a role where I could be hands-on again.

The right way to support juniors is much more fluid, because different people need support in different areas, and you have to give them chances to stretch themselves as well as preventing them from coming under too much pressure.


This is why I've always avoided moving into managing things. I'm a tinkerer, not a leader. I enjoy that and wouldn't enjoy the other. I may have done myself out of a bit of salary over the years, but I'm doing more than well enough as I am, and my skills/attitudes/other seem generally respected.


This is also not how to solve problems - at least in my experience. I must write some code, usually completely scrapped, to understand how things will fit together. PoC code is the next stage after white-boarding for me.


"Senior Engineer" isn't a single job definition, is it? There's senior in the technical sense and senior in the quasi-management sense.


What's the better way, assuming the goal is for the junior to do most of the work?


Extreme division of labor - did it work, as far as pushing the product forward?


This was not how I interpreted parent's comment. I thought they said that the hard part of coding is figuring out what to code. Actually writing it is the quick and easy part.


That's not a senior engineer's job... that's a job that has scope-creeped into many roles: project management, lead developer, product owner, and trainer. Are you doing QA and managing the production servers as well?


That's exactly how we define senior engineer at my company. There are a lot of people able to produce code - even really complex and sophisticated code. But you'll waste a lot of time if you're working without proper context and a well-organized team. On the other hand, you can't establish good context and organize a team well without knowledge of what it takes to turn requirements into code - therefore the most efficient way is to blur the line between some roles.

People who realize that usually quickly become essential to the organization. The rest are typically easy to replace.


It's all fine and dandy, but expect people to be willing to do this kind of work only if you pay them a few salaries' worth of money.

If you want a senior developer who will work as an architect, team lead, product manager, etc.,

(s)he'd better make 400k per year or more.


It seems like you assume that they are tasked more than usual, but in reality this is the way we want to work, because it's simply better. There are many seniors on the team, so it's not like one is expected to fully cover multiple roles. It just means that you aren't bound to a single one.

We have nearly zero turnover within the senior staff, and some actually came back from other companies, sometimes due to pure frustration with how little impact they had on what's happening within the project. Simply put, most of the team considers it the way it should work, not an additional chore.

About 400k (USD I assume) - where I live you could easily get a really nice house with that in less than a year, with zero mortgage and without limiting your daily spending too much.

I'm happy for you if you have the ability to earn that, but with such expectations and attitude - no wonder US corps are so eager to outsource to our side of the pond :-)


> About 400k (USD I assume) - where I live you could easily get really nice house with that in less than a year, with zero mortgage and without limiting your daily spending too much.

That's not the way to think about it when looking at leveraged assets. A house where you live.

> I'm happy for you if you have ability to earn that, but with such expectations and altitude - no wonder US corps are so eager to outsource to our side of the pond :-)

Are they? Last time I looked there were a lot of Europeans looking to move to America, but the reverse was exceedingly rare.


Yeah, I agree, and the most ridiculous part is that they still have architects, team leads, product managers, etc., who now really do nothing other than ride the gullible software developer.


We don't have architects :-) Most of the bigger topics are decided by communication between seniors in various projects (it's fully transparent; anyone can see and contribute).

We have some PMs/POs in order to cover communications with customers - there's a legal part to manage, market research, and there are difficult customers with unusual reporting needs that have nothing to do with technology, so everyone is happy to leave it to them.


Sounds very similar to how we operate as well.

I feel it really helps being part of the customer meetings from the start, so you have a good understanding of the customer's context and needs while designing and implementing. It also allows us devs to raise potential issues much earlier, which can lead to much better solutions than if they're discovered late in the project.


In general a "senior engineer" is simply a dev with about 5+ years of experience, so that they are no longer junior and no longer need hand-holding.

There's usually no, or very little, project management or lead work. At least that has been my experience everywhere I worked in the last 20 years...


I don’t know why you think that’s true “in general”.

If you have a team with a senior member in it, in general, what would you expect that team member to do? How are they different from the other team members?

They are not senior because they’ve been there the longest. That’s stupid. That just makes them the oldest worker, not the most senior.

I expect them to have more responsibility for making sure the team delivers whatever it is they do.

If the team does tax forms, then they’re more responsible for ensuring the forms are correct, and timely, than say, a junior accountant on the team.

If the team produces code, they’re responsible for making sure the team produces the correct, maintainable code in a timely manner.

If the team is two people, maybe that means “write lots of code”, but if it’s six people, it’s much more likely, in general, to be, making sure the rest of the team is doing the right thing at the right quality. Mentoring. Checking. Meetings. Documentation. Protecting the team from politics.

Why? …because seniority is about responsibility, and if all you do is write good code, you're not accepting the responsibility for anyone's effort but your own.

Obviously, it’s not solely your responsibility; you share that responsibility with the other team members depending on seniority. A team lead, for example, and other seniors.

…but, I don’t care how many years experience you have: 0 responsibility for anyone other than yourself makes you a junior developer.


I say 'in general' because that has been my experience everywhere I've known over the last 20 years.

"Senior engineer/dev" is a title, this is not the same as being the 'senior member' of the team.

In general (as per above) the title is given to devs with about 5 years of experience. It does mean more responsibility, but mostly in terms of being able to carry out dev work without 'hand-holding'. So I would say that a "senior engineer/dev" is simply a dev who has reached full capability as a dev. Sure, you are expected to help "junior engineers", but that's not the same as having leadership responsibilities. It's more like in a cop movie when the experienced guy is assigned a rookie as a partner.

In larger organisations (which is most of my experience) "senior engineers" are not the senior members of technical teams, which are "principal engineers" or above.


> "Senior engineer/dev" is a title, this is not the same as being the 'senior member' of the team.

I call bs.

I should probably say, “that hasn’t been my experience”, but I just flat out don’t believe you.

So, let me get this straight. In 20 years, you've found that senior developer is the role given to developers who have no responsibility above what a junior developer has; they just cost more?

Rubbish.

It's a sliding scale; I bet you those senior developers have other responsibilities: code reviews, mentoring, deployments, prod support, CI/CD.

I have never worked in, or heard of, an organisation where senior developers just get to sit around, goof off and write code all day… but get paid a senior developer rate.

Sounds great!

What organisations are these? I want to work there! Hook me up.

I think you’re suggesting that responsibility for team and project management is like a step function; you get none until you’re a principal.

That certainly hasn’t been my experience.

I honestly just have to say I flat out don’t believe that in 20 years at large organisations it has been yours either.


In my experience, you are correct. In the wrong organization, you are eventually given so many "other" responsibilities, that you barely get to do any hands-on development work. You'll be doing code reviews, support, and documentation all day.


This is all part of "development work" and not tasks that mean you're 'senior'...


My point is that the more senior you are, the less hands-on you become. Juniors are not reviewing other people's PRs 70% of the time.


> Why? …because seniority is about responsibility, and if all you do is write good code, you’re not accepting the responsibility for anyones effort but your own.

Exactly. Ownership of a certain scope.

This came out not long ago, and I love the graphic.

https://www.honeycomb.io/blog/engineering-levels-at-honeycom...


In my experience it's uncommon to tell people that "senior engineer" is a 'terminal level' because, again, this is usually not very senior at all and you might have issues attracting and retaining talent if you do that. Again, "senior engineer" is a title and although it has 'senior' in it, it is actually not that senior within the organigram of most companies.


Indeed, a great visualization. I'm definitely going to show it to our future seniors.


You might be correct in that you more accurately describe reality, but it still doesn't make any sense, and you have to do some serious gymnastics with both definitions and language, which just further illustrates what a big problem this is.


> In general a "senior engineer" is simply a dev. with about 5+ years of experience so that they are no longer junior and no longer need hand holding.

Definitions differ around the world, but that is my definition of an intermediate software engineer, between junior and senior.


Not the original commenter, but same shoes: the answer is yes and yes. Small companies have their perks and drawbacks :) And my view on seniority is definitely skewed: I would say I expect a senior engineer to handle a large (multiple-month) task, rip it to pieces, make a raft out of it, and hand out paddles to the other team members, while talking through the hard shit with other similar-minded persons in the company. I'd also expect correct ballpark estimates.

What I do not expect is the business part - decisions, communication with the client, money stuff. I also don't expect responsibility for all of this to rest on one person - this is where multiple roles come into play, as one person could break their spine from all this.


This is expected at big companies as well.


Those are roles that Senior engineers are expected to be able to play, at least at my company.

And yes, devs do QA because we have no QA staff and Production Engineers manage the servers.


That's what I thought at first as well, but I eventually realized that's not the case. The senior engineer is mandatory to be consulted and involved quite actively for all those other roles to function correctly.


Wait.. you are supposed to only code?


You're supposed to deliver value. But if your org structure is dysfunctional, I'd say you're also supposed to set boundaries and flag when you're spending too much time on things outside your core role.


Blurring the lines is a senior engineer's job (though at a larger company, this may not happen until the next level up). It's how they grow the scope of what they deliver to the company.

I would be upset if a senior engineer told me that anything in the path of delivering the final product was not their job. Defining 'job' here as understanding something and communicating/helping address/fix with the appropriate team.


Not OP but

> Are you doing QA and managing the production servers as well

haha... yes I am


I have switched on that, and figured out that there are more benefits to coding early than later:

- Users can give feedback (theoretical exercises are hard for users who aren't technical)

- You get ownership over the product

- It's easier to learn "what you don't know that you don't know" when you can iterate

- You have to do more development up front, but I believe you save it in the long run

Took me almost 10 years to gather enough examples to conclude that coding vs. no-code activities should not be 20/80 but 80/20.


> The hard part is coordinating, defining the problem, planning for the future, and communicating the current status of the problem.

Yeah the hard part about software engineering is being the henchman of your corrupt manager.


I think the hard part would be equally split between solving the right problem, ensuring the right flexibility, and making sure the customer will actually be happy.


It depends on tons of factors. Yes, there are things besides designing and coding, and they can be harder depending on the people and projects, but some strong links to the technical parts are always needed, in the sense that a manager is not an evolution of an engineer. Cutler wrote non-negligible parts of NT. Linus doesn't "code" much anymore but still has an enormous technical impact.

Limiting the dichotomy to "coding" vs anything else is really weird, because coding is just a part of design, and it never should have been considered the one absolutely essential part of software engineering - and that's the case regardless of your seniority. Otherwise you might as well be considered a kind of secretary with additional specialized skills. Even if you don't recognize the other activities, you are still doing them, at least temporarily in your head, even when working on a project alone.

And also there are tons of areas where code/design can still be tricky even for highly talented people. E.g. writing a state of the art compiler is not necessarily easy even when the problem has been well defined.


Where I live the worst of this is that one is often expected to be the head of a team as well as lead developer. Sitting in meetings with people talking about personal matters was not the reason I became a developer.

The purpose of having autonomous teams with agile development is to speed up development by defining responsibility between different roles in the team, and to give the team a sense of ownership that will enable quick decisions and shorter time to production. Scrum, for instance, also emphasizes the need for protecting the developers from the outside. But in this agile post-apocalypse it seems like the result has become the opposite.


Yeah, I keep seeing this enterprisey top-down centralized micromanaging cargo-cult "digital transformation" so-called "Agile" rigid overspecified prescriptive process conveniently ignore the crucial SELF-ORGANIZING TEAMS aspect of the original manifesto.


I would have thought that job would be a higher level than senior. Team lead perhaps.


I'm like him: a senior developer at a small company (~80 people total, 15 in software dev), and I also do all those things.

Sometimes I am team lead or tech lead for a while, then purely dev for a while, depending on what projects we have and how we are divided into teams at the time.

It's not considered "higher", just a role that the seniors sometimes have.

The only job title above "senior x" in the company is Director.


That's perhaps because it is a very small team but it's highly unusual in larger organisations.

In general "senior engineer" is actually a rather junior level in the overall organigram. 5 years of experience doing pure dev and your title is bumped to "senior engineer", but you're still 'only' a dev.


I’d expect team lead to be about senior level. A staff engineer should be doing more than that.


Yep. Coding is a team sport. When hiring, think more along the lines of drafting for an NFL team. How do the potential hire's talents complement the team's? Will their personality and the way they work fit with the team's? You ignore team culture at your own peril, and that's been true now for decades.


> The hard part is coordinating, defining the problem, planning for the future, and communicating the current status of the problem.

This sounds more like one person doing it all rather than "being a developer".


This is corporate software engineering in poorly designed systems. It's not software engineering and is a completely separate job. The fact that it's so prevalent doesn't negate that fact.


For me, it was that I thought that being a good programmer was that I write clean code with enough abstraction and indirection to make it future proof.

Boy, was I wrong. Unless you’re doing the same thing you’ve done for years, you can’t tell the future. And when your unnecessary abstraction turns out to be wrong, that’s exactly why we end up talking about tech debt in the first place: nobody wants to touch it.

Unfortunately, I didn’t have anyone to tell me this for the longest time. It didn’t sink in until I had to fix a bug in something I’d written in the past and couldn’t figure out just what I had written.

Now I try to write dumb and simple (yet sensible) code until there’s a good reason for abstractions. I have nothing to prove at this point in my career.


Probably an unpopular/heretical sub-opinion: if given a chance to change the past, I’d rather NOT read books like TAOUP and the others on the same shelf. Or at least I wouldn’t take them to heart. Because instead of collecting my own experience and fitting it to my projects, I invested heavily in these patterns and rules and “gems” and built something in me that I now have to destroy with advanced therapy (not kidding). A few weeks ago I said screw it (as a self-forced experiment, because I get anxious without structure, abstractions, etc.) and began to write “just code” without any pre-principles, reaching for programming methodologies only as an extreme measure. I’ve never felt better. It’s like walking new streets after you’ve been paralyzed for years. I write f--king code like I’m 15, it is easy and simple, and time to deploy / market / test ideas is several times shorter. My boss gets happily confused, not sure what’s left for next week; I hear it in his voice. I still have huge respect for Fathers like ESR, but… just make sure this knowledge actually does you some good, okay?

I don’t think I’ll stop this experiment anytime soon. Maybe I’ll reassess everything in a year or so.


IMHO there is a lot of value in learning the rules and then breaking the ones that, in your judgement, do not apply to a given situation. A variation on Chesterton’s fence [1] or, if you're more literarily inclined, the parable of the camel, the lion, and the child [2].

[1] https://wiki.lesswrong.com/wiki/Chesterton%27s_Fence [2] http://nietzsche.holtof.com/Nietzsche_thus_spake_zarathustra...


The thing is: they are not even rules, though some people think they are, but merely potential tradeoffs, interesting in some situations and that's it.

And it is highly counterproductive when people start to cargo-cult them, and oh my god, do they ever. I sometimes wonder if the world would not have been a better place without, e.g., the GoF design patterns book, Clean Code, or SOLID.


The more I learned about "best practices", the less productive I became. I think it happens because I spend my mental energy on solving the problem in a way that fits those so-called best practices instead of solving it however I can in the most robust way possible.

It took me quite a bit of time to re-learn to write code that solves my problems and is inelegant enough that I can jump in and modify it as my needs change, without thinking about how to do it elegantly all over again.

I'm starting to think that the tools must fit the domain elegantly; only then can the solution built with those tools be elegant. Code gets hard to read and maintain when the way of thinking about a problem doesn't match the way the toolset works. Things get messy when you try to make your tools work in ways they were not designed to.

For example, there are domain-specific languages and frameworks for things like maths or physics or engineering that work the way a mathematician or physicist or engineer thinks about a problem. If you try to make code written with these elegant in the sense of being optimised and nicely structured from a software developer's perspective, it will be a huge mess and very hard to understand from the mathematician's/physicist's/engineer's perspective.

Therefore, when working on something, I find that the most productive AND maintainable code is the code that matches my thought process, no matter how many sins (like repeating myself or writing non-reusable code) are committed. Also, optimisation for the sake of optimisation is evil. Abstractions work well only when they are intended to match the mental model of the solution, and they are evil when they are made to optimise something (like making it reusable for all kinds of situations).


I scare the absolute crap out of a lot of Junior developers when I tell them this.

I once said, jokingly, that "Actually, abstractions in code are bad" to a Junior and I'm pretty sure he considered quitting on the spot.

But I do think a lot of this stuff, designed to give everyone a common code philosophy, is actually just resulting in a lot of unnecessarily overcomplicated code.


In the course of my career I've learned a simple rule for processing programming wisdom. I don't follow any programming principle or practice unless:

- I have seen value from it firsthand, or

- I'm working alongside somebody who says they have seen value from it firsthand, or

- I've read something that has persuaded me of the likely benefit, and I'm curious to see if I can figure out how to realize it in a real project, as an experiment.

The obvious upside of this is that I don't get ripped off by useless bullshit. I can't tell you how much "best practices" programming "wisdom" I ignored that later disappeared without a trace, completely unmourned, like ashtrays on airplanes.

But it also saves you from doing good stuff wrong, or good stuff in the wrong context. A principle or practice may be amazingly effective, but if you don't understand it, you likely won't get much benefit from it. And very often if you don't understand something, it's because you've never seen a context in which it makes sense. If you try it out, you should try it as a conscious experiment, and drop it if you can't make it pay for itself. Don't keep doing it out of a sense of duty, or a feeling that "good programmers do this."

I'll never be entirely sure if something is 100% bullshit, or if I've never seen the right context for it, or if I'm just too stupid to understand it. I have finally, thanks to my first time working in an OO monolith in a dynamically typed language, for the first time in more than twenty years in the industry, understood what some of the old OO design ideas are about. It's a shame that my initial exposure to those ideas was through many years of watching people misapply them to create unnecessary mess in Java services.

Maybe I'm wrong about a lot of other things. Maybe someday I'll be working on a project and a light bulb will go off in my head: "Oh my God, so THIS is what dependency injection is good for!" I can't know if that will happen, but I do know that I won't spend the rest of my career setting up dependency injection on every single project I work on just because other people regard it as a prudent and mature thing to do.


There's a level beyond that where you actually figure out how to write good abstractions. It's likely you thought you were making good abstractions and useful indirections, but you weren't; hence the problem. Concrete code with little indirection will be better than badly-thought-out abstractions, incorrectly designed with unnecessary layers and indirections. That said, good ones, well done and thought out, are worth their salt and can be huge force multipliers for future runway.


I think parent’s point is that, you can be a great engineer, but also have limited knowledge of any given problem domain. What constitutes a good abstraction is driven in large part by that domain knowledge, rather than by your pure skill as an engineer.


There are also abstractions that are not domain-related; those exist as well, and I'd consider them more a part of product design. But software design can benefit from good abstractions at multiple levels, and it constantly does. Knowing how to design good abstractions and use them is very hard, though; bad ones, or bad use of them, will be worse than none.

For example, a schema is an abstraction. Choosing to have a strictly defined schema for your stored data is choosing to add a layer of abstraction. You could simply store things as JSON directly: serialize whatever object you have into JSON and be done with it.

Or you could choose to add a layer of validation and create a JSON Schema. Then set things up so the concrete data is created using the abstract schema definition, in a way that also automatically sets up validation of the data against that schema.

Now, sometimes this is overboard and too complicated for whatever you're doing; sometimes it's an amazing addition to a code base that really simplifies things and makes you more productive.
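
A minimal sketch of that second setup, assuming the networknt json-schema-validator and Jackson libraries (my choice of tools, not something named above):

    import java.util.Set;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.networknt.schema.JsonSchema;
    import com.networknt.schema.JsonSchemaFactory;
    import com.networknt.schema.SpecVersion;
    import com.networknt.schema.ValidationMessage;

    class SchemaCheck {
      public static void main(String[] args) throws Exception {
        // The abstract schema definition: what the concrete data must look like.
        JsonSchema schema = JsonSchemaFactory
            .getInstance(SpecVersion.VersionFlag.V7)
            .getSchema("""
                { "type": "object",
                  "required": ["name"],
                  "properties": { "name": { "type": "string" } } }
                """);

        // Concrete data is validated against the abstraction automatically.
        Set<ValidationMessage> errors =
            schema.validate(new ObjectMapper().readTree("{\"name\": 42}"));
        System.out.println(errors); // reports that "name" should be a string
      }
    }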

Edit: I'll give another, simpler example as well, to show how abstractions are relevant at all levels.

Take a Player class, where the Player has a position in the world map.

You could go the concrete direct route:

    class Player {
      String name;
      List<Item> inventory;
      int health;
      int x; // x position in world
      int y; // y position in world
    }
Or abstract out Position:

    class Player {
      String name;
      List<Item> inventory;
      int health;
      Position position;
    }

    class Position {
      int x;
      int y;
    }
This is more indirect and there's an extra abstraction, Position, but there are scenarios where it's much better like that, mostly when positions are often managed by other things, or moved around and manipulated in similar ways, be it for Players or NPCs or Cars, etc.

And there are scenarios where it wouldn't benefit much.

The solution where position is just concrete ints on the existing Player class is more concrete and direct, but not always the best.
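
As a hypothetical sketch of that first scenario (the Positioned/Movement names are mine): given the Position class above, one piece of movement logic can serve every entity that has a position.

    // Anything with a position exposes it through this interface.
    interface Positioned {
      Position position();
    }

    class Movement {
      // One routine now moves Players, Npcs, Cars... anything Positioned.
      static void translate(Positioned entity, int dx, int dy) {
        Position p = entity.position();
        p.x += dx;
        p.y += dy;
      }
    }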

Now, a mistake I find mid-level engineers make is that they'll read a blog or book saying it's much better to abstract out Position like this, for reasons x, y, and z. And then they'll do it for everything; they'll apply it to `name`, for example:

    class Player {
      PlayerName name;
      ...
    }

    class PlayerName {
      String name;
    }
Doing that will make your code base a nightmare; future you and other engineers might hate it. Why is everything abstracted like this? What's the point? What are the reasons behind it? What are the benefits?

You could conclude never to abstract anything ever again, and that would be better than the monstrosity where everything is over-abstracted for no reason, and often badly implemented at that. That's an improvement. Later on you'll get even better and learn the nuances: when, why, and which abstractions, in just the right place, in the right amount, in just the right way, and it'll be better still.


I can almost guarantee you that any codebase with a "Player" class that looks anything like your example is a very poor codebase. It shows me they just didn't know where to start, so they started by throwing everything in there. Abstractions are always about the consumer of the abstraction, not the implementer. No consumer needs everything in "Player", so it's a terrible abstraction, and it's not just a data type or service or implementation of something else ... it's a God Class that hasn't earned its keep.

The focus on building abstractions is misguided. You don't build an abstraction because you have stuff lying around that implements things -- you build an abstraction because you need it to do your job. That's the only valid reason to ever build an abstraction: you, as the consumer, need the abstraction to do (or to define) your own job. As a consequence of this, most abstractions should be defined before they're implemented. It really feels like most people miss the point on this one, and that's why we end up with bloated abstractions. They're not about what you have. They're about what you need.

That means you should actually have lots of abstractions (assuming you have lots of different needs throughout your code), and they should all be simple, small, and clear. It should be obvious how to implement them, and obvious what they're used for. They have to be: that's how they were built to begin with.

(In fact, while we're at it, the focus on classes is misguided too. Why does everyone think you need to make classes that mirror common nouns in real life? Bad CS education?)

I could absolutely see "Position" (and, critically, everything in it) as something some service needs to do its job. In fact by simply looking at that class, I've learned a lot about how your game works: it's 2D (no Z) and probably tile-based (ints, not floats). We've made a decision: that's how position works in this game. How does movement work? Start that next -- it will use Position. Keep picking away at the edges, making useful decisions about the game, etc. Build abstractions only when you need them to answer that question: "how does X work in this game?" You will never get to the point where you build a "Player" class like that, which is why I can confidently say that a codebase with such a class must inevitably suck.


> I can almost guarantee you that any codebase with a "Player" class that looks anything like your example is a very poor codebase. It shows me they just didn't know where to start, so they started by throwing everything in there.

And then there is Unreal Engine 5's ACharacter class[0] :-P. I recommend checking the superclasses too.

[0] https://docs.unrealengine.com/5.0/en-US/API/Runtime/Engine/G...


First off, I'll say that popular frameworks optimize for being popular, which usually means they let inexperienced people make cool things quickly. This necessarily involves tradeoffs that end up being "walls" to more experienced coders. It's very very hard to let inexperienced people make cool things quickly without restricting power-coders. So "Unreal does it" doesn't necessarily mean it's the right choice for great code -- it only means it probably helps inexperienced coders make cool things quickly.

> Characters are Pawns [AI or human decision-maker] that have a mesh, collision, and built-in movement logic.

Indeed that's a combination of a lot of different responsibilities. Probably too many. Why built-in visuals (mesh) but not built-in audio? Why a mesh and not built-in particle effects? I'm guessing it's just because that is the combination that they found helps inexperienced coders make cool things quickly. I'd be really curious whether people who spend a lot of time tweaking their engine, or make games that are more complicated than just Another FPS, actually use that class much. I suspect they either don't, or they have several similar varieties of their own, which they sorta switch between as it makes sense and then go "Dammit, I wish we had made this an ACharacterTypeSeven, not an ACharacterTypeSix!!"

Of course there are times when you combine responsibilities together into larger objects, but the trick there is to always accept that this is just one projection, one perspective on the entity. If you start to think of that ACharacter object as the character, you'll have problems. It's an arbitrary boundary. When you come up with a cool idea to, say, have your character split in two parts with independent motion before merging back together a few seconds later, is that two ACharacter instances or one? You've duplicated some parts of it, but not others.

"But dude, YAGNI! Don't try to predict the future" you say, missing the point. I'm not saying restructure your code just in case someone wants to split characters in two later -- that's YAGNI. I'm saying throw what-ifs at your code to see if it holds together as a sensible concept right now. You future-proof your code by making sure its concepts are clean, independent, and composable, not by trying to predict the future. My character-splitting example is not an example of something we should plan for, but rather an example of why the concepts may not actually fit together that well. When I look at ACharacter, I don't see something that's composable -- I see something that's already composed for you, and if you want a different composition, it looks like a pain in the ass. That tradeoff makes sense if your main goal is to help inexperienced coders make cool things quickly, but it does not make sense for the codebase you rolled yourself.


The technique you have described of viewing the simple object combinations as "projections" is an excellent one; composability in your system emerges from efficiently selecting and combining these projections into the desired combination. At different points in your system you just take different projections. Cross-cutting concerns are a breeze.

It actually all starts to feel like.... SQL! State is stored globally in a defined schema and queried as needed by the system.

But you can't do this if your compositions are preordained from on high by a rigid class hierarchy; the data is crystallized into the "blessed" projection and that's that. It's analogous to your SQL queries being constrained to solely static views. No JOIN. No GROUP BY. No WHERE.


> So "Unreal does it" doesn't necessarily mean it's the right choice for great code [...] "But dude, YAGNI! Don't try to predict the future" you say, missing the point.

Actually, I'd say the opposite: "Unreal does it" indeed doesn't mean it's the right choice, but "Unreal does it" proves that in practice that stuff doesn't matter - Unreal is a codebase going back decades and yet it is as popular among developers as it ever was (some developers even throw away their own engines to switch to it).

So while these topics can be amusing to read, in reality they are bikeshedding of little more importance than using spaces vs tabs or where to put curly braces and how that affects diff tools.


I wholly agree with you and your commentary here is one of the most profound things I've read about software engineering in a long time. But, to play devil's advocate,

> I'm guessing it's just because that is the combination that they found helps inexperienced coders make cool things quickly

Is not "making cool things quickly" the essence of enterprise programming? Sure, you can make the cleanest, most perfectly abstracted code for yourself when the requirements are well-defined and unchanging, but that's not the environment you find in business. One might contend that such a combination is the optimum for enterprise programming/making cool things quickly.


> Is not "making cool things quickly" the essence of enterprise programming?

I think it's not. I'm not sure there's anything quick about enterprise programming. If you're cynical, the essence of enterprise programming is selling absurdly expensive software to clueless senior leadership that will never use it. If you're optimistic, the essence of enterprise programming is being a good data steward while elegantly handling the needs of a lot of different stakeholders and interfacing with a lot of different systems (some automated, some implemented only in brains).

In game programming, if you can't figure out a good way to get the camera to work in one particular level, you just scrap or redesign the level. In enterprise programming, if you can't figure out a way to import a particular Excel format, you could seriously harm the usefulness of your project or even lose a contract. You have to "get it done", and there are a lot of "its" to get done.

When I say "make cool things quickly", I mean that there are tradeoffs between having high velocity in the beginning (standard templates, pre-defined assets, content management systems, implement the whole thing in Salesforce) vs. maintaining that velocity through the lifecycle of a potentially very long project. I claim that one of the things that makes popular frameworks popular is because they tend to heavily prioritize the former over the latter. That is great for going from 0 code to shipped quickly, but it's the wrong choice for 5+ year projects like you see in enterprise.

In fact I think one of the (many) things that poisons modern enterprise programming is the emphasis on tools that get you going quickly, rather than tools that stay loyally by your side through the whole project lifecycle. MongoDB is quick and easy to set up, because you can just throw whatever JSON objects you want in there, without spending all that time worrying about "schema" (I do think people spend too long worrying about schema, but the answer is not to abandon it -- that's a whole other subject). But you still have a schema! It's just that now you don't have a dedicated tool to help you with it, and as your needs and data change, you're the one responsible for keeping it up to date. It seems very easy to get mismatched or out of date JSON objects in there and very hard to clean it up (although I'm no expert on JSON databases). Whereas SQL Server or PostgreSQL will support your changing schema very well throughout the whole project lifecycle. If you took over a 15-year-old project, would you rather it had been using Postgres or MongoDB that whole time? I know I'd prefer Postgres.


I don’t disagree with anything you’ve said, but just to the specific narrow example of UE’s ACharacter: there’s nothing stopping you using APawn as your “player” and composing the collision, mesh, movement and functionality as you please - in fact, I suspect the majority of people using Unreal Engine for anything other than toy projects would do just that. I think the “pre-composed” ACharacter exists mainly to help with quick prototyping.


> They're about what you need

I've no experience with game dev, but in other areas of development what you need is often not known ahead of time (which I believe the parent is trying to say). Operating under those conditions makes the Position abstraction somewhat arbitrary (until it's obvious it's needed by other parts of the system). Aggressive refactoring and robust testing are necessary when operating under these conditions.


> what you need is often not known ahead of time

Well that's kind of the point of software engineering, isn't it? Actually typing code is only a small part of software engineering, and if you don't know what you need yet, then you're probably not ready to write code. That doesn't mean you're not being a productive software engineer! It just means you're still working toward that point.

I could be clearer about what "what you need" actually means. Let's say I sit down to (A) prove the four-color theorem, which says that no more than 4 colors are needed to color in a 2D map with no two adjacent regions having the same color; or (B) write a function to color a map using as few colors as possible. Before I can start on the meat of it, I have to decide exactly what I mean by "map". Anything that (A) relies on my proof or (B) calls my function is going to need to turn its data into the kind of "map" I'm working with.

Oh, I know what a map is: it's an ArcGIS Pro 10.7 Geodatabase with a Polygon layer! Hand me one of those and I'll assign a 32-bit ARGB value to each polygon. What? You don't have ArcGIS Pro 10.7? Well, sucks to be you. Obviously it has to be that, since I need [insert esoteric proprietary feature] in order to color it.

Hmm, well, okay, maybe I don't need every feature of ArcGIS Pro 10.7 Geodatabase Polygon layers. In fact, I really just need a list of regions and how they're connected. Do I need literally every wiggle-waggle of every border between each region? Well, not really ... in fact all I really need to know is which ones touch, and which ones don't. In fact, it turns out what I need is a Planar Graph. It probably shouldn't have any loops (nodes connected to themselves) either. A loopless planar graph. If you hand me one of those, I can color it for you -- in fact, I'll just assign each node 1 through (up to) 4 to represent the colors, and you can do whatever you want with that information, rather than me picking the actual ARGB values for you.

The reason I associated it with a math proof is because it's more clear in that case that reducing the preconditions on the objects you accept increases the power of your proof. Proving something interesting about all multiples of 5 is less powerful than proving it about all integers, and less powerful than proving it about all elements of an Abelian Group, etc. Writing your code to work on any loopless planar graph is much more powerful than writing it to only work on <Random Complex Proprietary Format>.

And we settled on "loopless planar graph" not because we had a bunch of graphs lying around -- we probably didn't! We probably actually had a map representation we've used elsewhere. We settled on "loopless planar graph" because that is the minimal possible description of the objects we can run our code on. That's what I mean by "what we need to do our job". That is the birth of an abstraction.
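
To make that concrete, a hypothetical sketch of the abstraction born this way (the names are mine, not from any real library):

    // The minimal precondition the coloring code needs. Nothing about
    // wiggle-waggles, polygons, or proprietary file formats survives.
    interface LooplessPlanarGraph {
      int nodeCount();
      boolean areAdjacent(int a, int b); // never true when a == b
    }

    interface MapColorer {
      // Assigns each node a color 1..4 so that adjacent nodes differ;
      // the caller decides what the colors actually look like.
      int[] color(LooplessPlanarGraph graph);
    }

Anything that can present itself as such a graph, ArcGIS layer or not, can now be colored.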


This is a great comment and should be read carefully by anyone reading the comment section on how to learn more. Very well explained!!


But the midpoint of that would be to type-pun PlayerName, because while a PlayerName is a string, not all strings are player names, and it's good to be able to see in the code base what types are being passed around.

One of the mistakes I really hate seeing people make in typed languages is not using types for distinctly important data sets - a good example is when you have ciphertext and plaintext being passed around. At an application level you want to be really sure that you're going to be accepting and using ciphertext in the parts that need it - even if both are technically valid string types.
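
A minimal sketch of that idea in Java (hypothetical names; any real crypto API would differ):

    // Both wrap a String, but the compiler now refuses to pass one
    // where the other is expected.
    final class Plaintext {
      final String value;
      Plaintext(String value) { this.value = value; }
    }

    final class Ciphertext {
      final String value;
      Ciphertext(String value) { this.value = value; }
    }

    interface Cipher {
      Ciphertext encrypt(Plaintext input); // can't accidentally swap args
      Plaintext decrypt(Ciphertext input);
    }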


>For example, a schema is an abstraction

Meh, I'd say a strictly defined schema, moving database consistency logic into the DB, etc. is an example of a bad abstraction in most cases where I've seen it used. The idea sounded really good when I was a junior: you can have the data layer enforce integrity from all sources.

Except most applications are the exclusive owner of the DB and its schema - even in the microservice world it's one database per service. If I see other apps hooked up, they're passive readers/exporters/loggers/etc.

SQL databases still don't play well with staying in sync with the repo (it requires specialized tools or extra care, which again usually means extra tools).

Database schema constraints are often crude and/or complex and don't scale well - it's common for people to avoid even rudimentary things like foreign keys because of what they can mean in terms of locking/ordering and write throughput. And as for using things like callbacks, etc.: good luck.


I agree that SQL vs git is not a perfectly solved problem, but I would argue that NoSQL vs git is an even harder problem where the state of the DATA does not necessarily match what your current code says -- you need to remember/comment that some fields did not exist in past data or run jobs to migrate the old data etc; it is doable but not obviously better than the state SQL is at.


Dealing with breaking migrations is hard with or without types, but I agree that having a database schema catches this sooner and more reliably (analogous to, say, having an API schema and catching breaking changes by diffing).

But from what I've seen, using the schema to enforce data consistency brings more problems than benefits.


A (json) schema is a specification, which is the opposite of an abstraction.


An abstraction is something that can be made concrete. For example in OOP, Classes and Interfaces are abstractions.

An abstraction has no material presence; in code, the concretions are the actual data in memory, in their exact place, with their precise arrangement and linking and all that, on a particular machine.

Source code is an abstract representation of a running program for example. Everything in source code form is an abstraction.

Generally an abstraction has gaps; it can't, on its own, actually be run as the exact behavior you want.

A schema is an abstraction, because a schema isn't executable into the concrete running behavior you're trying to implement. A schema, or a data specification (which is just another word for schema), is definitely an abstraction. It abstracts over the actual concrete data you will have at runtime.

Everything that is not in the final concrete running form it needs to be at runtime is an abstraction.

A specification abstracts the implementation away; it's an abstract idea of what you want, but it does not specify the implementation.

Unfortunately the concept of abstraction is itself very abstract, but one thing a lot of people don't realize is how many things are actually abstractions.

Even a simple function is an abstraction. It can only run once it's provided actual values for its arguments; it is but a template abstracting the idea of mapping inputs to outputs using some logic. You need an instance of it with actual values for its arguments to be able to run it. A function with concrete values provided as arguments is a concrete thing, but the function definition, i.e. the code for it, is simply an abstraction.

You can further abstract over abstractions: a function signature abstracts the implementation further away, to be filled in later.

And generally speaking, even a running program is simply an abstraction of reality, a simulation of something real. But this is where you start to enter product design: how best to abstract over the real-life use cases you want the program to represent, model, and simulate.

Programming Languages and other frameworks often provide you means of modeling abstractions, constructs that can help you define your own abstractions. Those tools will vary from language to language, in some OO language like I said you're given Classes, static types, Interfaces, inheritance hierarchies, etc. In some functional languages you'll be given abstract data types, functions, higher order functions, type classes, etc.

I could keep going on, but I find the concept of abstraction itself is often misunderstood.

P.S.: You can easily argue for a different definition; arguing semantics has no definite truth, as definitions of words are just axioms we define. I believe it is more useful and beneficial to define abstraction as I just did, for being able to better reason about and make judgement calls as to how exactly to structure and design software code. I'd encourage others to give it a try: attempt to rediscover abstractions as I just described, and in my experience you'll learn a lot, and you might become a better programmer out of it.

Just my 2 cents.


> actually figure out how to write good abstractions.

There's an element of no-true-Scotsman in this argument. Most codebases that I've seen have excessive amounts of unnecessary abstraction. It's rare to see a codebase that has too few abstractions. You can of course make the argument that "they just weren't creating the right abstractions", and it's not necessarily incorrect - it's just unhelpful as a piece of advice. You can take any methodology - no matter how bad - and claim that any seeming faults in the methodology are simply the result of people applying it incorrectly. "No true Scotsman would have created this abstraction".

Since the needle is currently pointing in one direction more often than the other, I think it's generally helpful to dish out advice that moves the needle in the other direction: advice such as "fewer abstractions are generally better".


There was a time when there were seldom any abstractions: people wrote in assembly, as close to the concrete machine as you could be. It was painful and complicated, and doing anything was tedious, effortful, and slow.

Abstractions were clearly needed. Higher-level languages abstracting over the machine's lower-level details were needed.

Then there was a time when abstractions themselves were very simple: branching and looping were all just done with "goto". It was error-prone and confusing, and it made working with other people's code bases difficult. Abstractions were clearly needed: something to abstract over the lower-level details of branching and looping, and memory management in relation to those.

Fast forward to Java. Now we already started with quite a lot of abstraction, yet there were still times when things were more tedious than they needed to be; more abstraction was still needed. That led to the addition of interfaces, the development of frameworks like Spring, the creation of template languages like JSP, and the addition of code-generation tools like Lombok or API generation like OpenAPI.

Once again more abstraction became hugely beneficial, delivering real productivity boosts while still helping to make things clearer, not more obfuscated. It is true that at each layer it becomes harder to understand how all these abstractions reduce back to some concrete instance at the end of it all. But if you can trust them, you need not worry about that: a good abstraction lets you forget and ignore the complex details underneath it, freeing you to focus on higher-level concerns, progressively closer to your real domain problem and further from the machine's concerns.

Finally, enterprise software reached a point where managing complexity got difficult, so people tried to promote the best practices they had learned: basically, ways to fit in more abstractions in certain situations, which again benefited them greatly. There was a big push to advocate for "design patterns" and other judicious uses of abstraction.

Lots of people, often mid-level developers, me included at the time, went seeking advice. Without understanding why, what need drove this advice, or what use benefits from it, we took it to heart: SOLID principles, GRASP, YAGNI, inheritance, interfaces, composition. We took it all at face value and tried to arbitrarily apply our limited understanding of it everywhere we could, religiously and indiscriminately.

This frivolous misuse of abstractions yielded the over-engineered, obfuscated, puzzle-like code bases that plague a lot of enterprise software. Where the hell is the actual code doing the actual thing?

This had more senior engineers once again try to push a new "best practice", a new commandment to amend for the misunderstanding of the prior ones: "fewer abstractions are generally better". Or in other words: just use the abstractions more experienced people have already put in place, stick to your popular framework, follow its existing patterns, stick to simple usage of your programming language, and don't try to be smarter than you are, aka too clever.

This is great advice, and I'm absolutely in support of it; some developers are not ready to hear the more nuanced version of it, and it might lead them down the wrong path again.

But my point is, good abstractions are really awesome, and by definition what makes them "good" is that they actually help rather than hurt. There are countless examples of good abstractions throughout the history of software development. There are even more minor abstractions that everyone implements on a daily basis without realizing it, where once again being better at them results in better code: like simply choosing what a method will be and what its arguments and return value will be, or choosing where the data will live.

So my point is, in my opinion, a senior engineer is one who knows about the "generally" part of "fewer abstractions are generally better". A senior engineer knows exactly when fewer abstractions are better and when more abstractions are better. Don't stunt your growth by once again being religious about a best practice and arbitrarily being against all abstractions because the best practice said to avoid them.


> Now I try to write dumb and simple (yet sensible) code until there’s a good reason for abstractions.

Good abstraction is a lot harder than most think. We tout abstraction's benefits as a reason to abstract but fail to recognize bad abstraction and how it completely negates any would-be benefit.

Too often abstraction (so called) requires a lot of research into how it's implemented. By the time you figure out enough implementation details to use the abstraction, you could have implemented it yourself quicker. It saves no time or effort but you're stuck with it because existing code is hard to part with.

If you're not relieving your users of a shit-ton of implementation details, then it's just not a good abstraction. Code for someone who doesn't want, or doesn't have time, to learn the details. Often this will be your future self, despite your current belief that your mind is a steel trap that would never forget how you did this stuff.


I blame this tendency to overabstract on the emphasis on top-down design / teaching methods. Beginners are taught to abstract whenever possible, and aren't taught when to stop. They don't see the reason behind it, and instead add abstractions dogmatically, dramatically increasing complexity in the process. When abstraction is used well it definitely decreases effort and increases flexibility, but all too often it's overused and results in "object-oriented obfuscation" instead.


An approach that has been working for me is to work bottom-up. Think about what basic functionalities you need, and start implementing them. Once they work properly, you can start composing and orchestrating them into larger units. That way you're less prone to building Babylonian towers of superfluous abstractions and indirections. Doing it in a way that's maintainable and extensible should come with experience.


I don't know... the far more common problem I see is an unwillingness to abstract. Spaghetti is way more frequent than massive abstraction. Now the popular thing is the move toward a functional-like style, which leads to one stream of flow that is quite difficult to decipher.


> which leads to one stream of flow that is quite difficult to decipher.

By the definition of one stream of flow, this is literally easier to follow lol. One stream of flow as opposed to what? Several streams that branch and intermingle? Spaghetti is several intermingling branching streams which is very hard to follow. Following one stream is easy, you just follow the stream /shrug


No, it is not easier to follow in practice. Second, the definition of "one stream of flow" is not "easier to follow"; it is "one stream of code". It forces you to hold all the ifs and fors in your head all the time, and it does not explain what they mean.

Yes, as opposed to named structures you can understand in isolation and then treat as units.


> you can’t tell the future

This is so true. It's tricky, because some things are worth thinking ahead about a little - but on average, I've learned that it's far better to focus on making things easy to change than to make them directly accommodate future requirements (often at the cost of immense complexity)... If the code is so small and simple that you can easily re-write it, then it is future-proof and easy to understand and maintain. Win-win.

It's easy enough to understand this abstractly, but it takes practice to know when to plan ahead and when not to. A good oversimplification is: if you know it's a future requirement, it's worth thinking about, maybe even worth making space for in your architecture/API/data/whatever; if it's an unknown, just don't bother, and try to keep the code simple instead so that you can adapt.


Well said. We are having a similar "fight" where I work now: way too much premature optimization. I think the one concept that takes a while to really understand is to focus on separation of concerns. It is far too easy to get encapsulation wrong. People (imo) generally forget to apply the SoC test that tells them where to draw the line. We also have a lot of parallel development, where re-factoring a View (Screen) could cause a rippling failure in other people's code. Too often they try to reuse everything instead of doing the important part: extracting the shared business logic into a XXManager and letting the Views just be dumb. Who cares if two Views developed in parallel might be too similar? They might diverge in the future, and then you don't have to worry about one View doing two jobs.


There are also abstractions that are so good they are invisible until someone tries to reinvent the wheel without them. They are then forced to confront some reality which is more complex than they thought.

What seems to be the common thread is: "you are not as good as you think you are".

Never enough humility.


I don't understand this distinction between futureproof vs simple code. I would have thought they are the same.


They might be the same, but often, future-proof code is written with a future requirement in mind, with extra "hooks" added in to allow easy addition of that future requirement.

For example, you might add an ORM to abstract away the DB-specific SQL, even though you only run on one database type, because it allows you to switch in the future.
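
For instance, a hypothetical sketch of such a hook (the names are mine): callers code against an interface, and the database-specific part stays swappable behind it.

    import java.util.Optional;

    class User {
      String email;
    }

    // Application code depends only on this interface. Today it might be
    // implemented with hand-written SQL for one database; tomorrow an ORM
    // or a different database can slot in behind it unchanged.
    interface UserStore {
      Optional<User> findByEmail(String email);
      void save(User user);
    }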


This is exactly what I like in Golang.


> Things like indentation, formatting, naming – god forbid you did it differently than I would have.

I agree with chilling out about this when it's a human making the changes...except now that we have autoformatters we can actually be more pedantic about this than ever before :) If the autoformatter doesn't agree with what you wrote, bam, red CI.

> Everyone writes tests

A big thing I've learned over the years is that making your code testable often takes more foresight and experience than actually writing the tests. "What's the right way to test this?" becomes an important question when starting something new. And "Can we make it easier to test this?" becomes an important question when inheriting something.


This is why a lot of people don't test, or claim that testing takes more time since "we spend half our time fixing the tests": they lack the experience to design code that's testable.

I think it's also worth pointing out that, up to a point, code that's testable tends to be more maintainable. This largely has to do with coupling and cohesion. Code that's hard to test is indicative of too much coupling and/or not enough cohesion.

This pendulum can swing too far the other way, though. I hate when relatively straightforward code is made more complicated in the name of testing (this often comes up when dealing with non-deterministic sources of data, such as the current time). But this, in my opinion, is a comparatively small problem.
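
For the current-time case specifically, a small sketch of the usual compromise, using java.time.Clock (my example, not from the comment):

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    class TrialService {
      private final Clock clock; // injected, so tests can pin "now"

      TrialService(Clock clock) {
        this.clock = clock;
      }

      boolean isExpired(Instant trialEnd) {
        return Instant.now(clock).isAfter(trialEnd);
      }
    }

    // Production: new TrialService(Clock.systemUTC())
    // Tests:      new TrialService(Clock.fixed(Instant.EPOCH, ZoneOffset.UTC))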

100% you should test private functions if it makes sense. Whether something is public or not has no bearing on the value you'll derive from testing it. If anything, sometimes you can avoid a real headache by focusing specifically on the private function (by definition, the private function has to be more focused than the public function calling it).

Also, we should probably all be fuzzing more.


>I think it's also worth pointing out that, up to a point, code that's testable tends to be more maintainable. This largely has to do with coupling and cohesion. Code that's hard to test is indicative of too much coupling and/or not enough cohesion.

I feel like whenever someone complains that the code is hard to test, there's always someone who's quick to claim that it's just indicative of problems with the code itself, rather than an issue with writing tests. This rings false to me.

Making code testable is simply one aspect that you need to take into consideration when writing code, along with readability, extensibility, modifiability etc. If you remove one of those constraints, it often becomes easier to focus on optimizing for the other constraints. I don't believe that adding testability as a constraint somehow makes the code better - more often, it just forces you to write the code in a certain way which feels 'logical' when considering the constraint of testability, but makes it harder to optimize the other constraints. The feeling of 'revelation' when you end up needing to modify code to make it more testable is mostly just the constraints imposed onto the code.


In my experience, people who learn about "testing" learn an inflexible TDD-based version where everything has to be covered by unit tests, meaning everything has to become overly complicated so dependency injection will work, instead of merely creating the only thing you'd ever want it to use.

If you’ll allow other things like integration tests as the first level of testing, things get simpler. Unit tests aren’t the only kind of test and in fact they’re the least good kind.


Yeah, I agree. Unit tests are fine for small things that can be tested in isolation, like "reverse string" etc.

For more complex things I prefer integration tests. I.e. "program output tests".


Much of the time, once my code is the way I want, there's almost nothing to test. Any tests I write would just be an echo of the implementation itself. I think if you're using your static type system to its fullest and writing clean, "obviously correct" code, 80% of your tests are simply not necessary at all. It goes:

- Bad code: almost no tests.

- Good code: lots and lots of tests.

- Great code: almost no tests.


The type system does help a lot, but I find the tests matter more for the next engineer(s) who need to touch your code. Being able to modify code you didn't write and don't 100% understand while being sure you didn't break existing features is priceless.


You definitely have to write fewer tests in well typed languages like Rust, but you still need some tests! Maybe 1/3 as many as in something like Python, rough guess.


Testing is there to ensure your code functions the way external users expect it to. There's a reason SQLite has an extremely robust test suite even though its code is great.

Testing the internal interfaces/abstractions in your system is less important - which I believe is what you're getting at. Those tests are definitely echoes of the implementations. Often they're for other developers.


I don’t get what you mean about autoformatters. For me they run on save so it’s pretty hard to get errors from them. And I found the big advantage was not caring about formatting at all. I could write a long line and have everything rearranged and indented with a keystroke, and I didn’t need to worry about eg maintaining vertical alignment in the code or whatever.


The idea is that you run them on save, but you also run them again in CI in a "fail if you would've made changes" mode, so that if someone has their editor misconfigured or edits text with cat or whatever you still force them to run the formatter somehow before they can land their PR.


I still reminisce about the first big project I ever built. As an intern I was told at a high level what to do and given two years alone in a room, working eight hours a day, to figure out how to build it; zero guidance from anyone.

I didn’t really have any concept of importing libraries or using open source software, so literally every part of this huge and complex application was custom-written by me. All UI elements were custom designed from shape primitives, including a full time series graphing utility and a drag-and-drop programming language with compiler. The database was built by me using a custom binary data format I had invented and lots of disk IO.

It was extremely fragile, but it was also the most beautiful and functional app I have ever used to date. I wish it weren’t a Java applet so I could host it on the web again, but alas.


> it was also the most beautiful and functional app I have ever used to date

I laughed at this, because of course a software developer would think the best-ever UX is the one they built in complete isolation with exactly 1 user (who happened to also be the developer and designer).


I see your point, but I would give GP the benefit of the doubt. Their app might be more beautiful and more functional than you think!


This seems to be the extreme opposite, though, of the other posts about badly structured responsibilities. As a senior I will always rely on having another set of eyes, or other opinions and viewpoints. It's invaluable!

I wonder, if you went back to the source code now, would you feel the same about its brilliance! :)


What year was this?


This is awesome. I often think I would be a better developer if I had something like this when I was first starting out. Alas, 95% of my experience is gluing libraries together to check off Jira tickets.


Can people even be "junior" anymore? It's relatively hard to find those kinds of positions these days, or at the very least they are somewhat sparse.

I thought it was interesting that this month, many more of the "Wants to be hired" postings were juniors compared to what people were looking for in the "Who's hiring" postings. I've Ctrl-F'ed the monthly job postings for 'junior' for the better part of a year, and there are fewer and fewer.

Why hire a junior when surely you can find some established OSS person? It's so competitive these days; as much as I personally want this not to be the case (so I can score a job), I don't see why any company would want to hire a junior dev anymore. The interviews I get tend to play this out. Everyone likes, abstractly, the pedagogical ideal of bringing someone in to learn, but when it's their money on the line, they go for the most experienced individual they can acquire.


Our company has a structured junior onboarding program. We take in a cohort once a year and they go through a 12-month program that happens while they are on the job in their various teams, designed to help them continue to grow their overall developer skills and also learn how things work at our company (we're a pretty big enterprise).

This was sort of inspired by Facebook's internal bootcamp idea, but it also grew out of the realization that we were not set up to take in juniors without some kind of structure that ensured they were provided guaranteed support to keep growing into their roles, since the teams across our department (of ~500 people) have such different focus. This program exposes all of them to the different facets of being a front-end and back-end developer, no matter what team they are on.

The juniors have 2 to 4 sessions per week on different tech topics, with courses taught by volunteers from within the department who sign up to prepare the course and present it.

The program has been a huge success (both for the juniors and for the more experienced folks giving the courses), and we are starting our 3rd cohort this month.

When we opened the job posting to take applications this year, we got 450 applications in 3 days, for 10 possible positions. I wish we could expand the program to take in more people but we've found that 10 is what we can handle for now.


Sounds like a dream program to me! I can understand why it's so popular.


Small piece of wisdom I received when I was an intern:

"Never get sentimental with your code"


While this is indeed very good advice, I learned to cope with this a little better than "just don't care".

Mostly because I do enjoy coding, and am still sentimental about code I wrote decades ago. The trick is to split that kind of code into libraries of its own, preferably open sourced. This way, once your code has outlived its usefulness to your current project, it can still live on and be useful somewhere else.

For example, one of the first things I wrote was a jQuery plugin that I really liked. I polished it, wrote some docs and demos, and in its day it was pretty cool (it stayed on the HN homepage for two days). I still get mentions for it years later, from people actually using it even now.

Even if I myself don’t find it useful at all, it’s still nice to see it is used by someone somewhere.


I’ve had customers request complete rebuilds of functionality several times over, destroying weeks of effort each time… and I’ve never cared, as long as I’ve clearly communicated my concerns beforehand.

I’m precious about the code only while developing it. The process is the fulfilling part to me, not how the end result is used. The approach has its drawbacks, I’m sure, but I find it very easy to not grieve over paid-for-but-unused work.


This applies to many things. Stephen King, on writing: "You have to be willing to kill your darlings."


I get paid to write code, I couldn't care less if it's deployed or not.

If they pay me to write it, and it doesn't get used, that's their waste of money / time, not mine.


As long as the code is versioned, you don't completely kill your checked-in darlings.


> That’s why mentors are so important, and the team you work with is worth so much more than a couple bucks in your paycheck. Don’t accept a junior position where you’ll be working alone, if you can help it! And don’t accept your first role (or, honestly, any role) based on salary alone. The team is where the real value is.

Couldn't agree more.

Of course, with today's attitudes towards us olds, and the average term of employment as an SWE being about a year and a half, I'm not so sure how big a role mentorship plays.

I think that having folks be responsible for their code, from napkin sketch to deprecation, is a great way to teach us how to write good code. That's been the case for me for almost my entire career. A quick shufti through my code repos will show the results.


It's still baffling to me how short the average tenure for SWEs is, in general. I feel that after a year, maybe, is when I finally start getting the lay of the land, meeting some people, understanding more of the domain, and causing impact worthy of my paycheck. I'm now at 3.3 years at my current gig and I feel like I have superpowers, if that makes sense. I think it takes time to be in a position to really do a good job and to see some earlier decisions paying off (or not), and it's speeding up my learning and responsibilities a lot!!


In my 20+ years as a developer, I've always advised developers working under me to move around and not stay in the same place too long. After 3 years it's potentially time to start looking around.

It's not just about the money, you also want to acquire varied experience and see how different teams work and tackle different problems, and expand your own experiences.

You can even come back to a previous company some years later and probably come in at a higher spot than you'd have been able to get if you stayed put and tried to climb internally! I've seen this many times.

We called it the "zig-zag".

You do make a good point about sticking around long enough to see the outcome of decisions and the successful(?) launch of products or projects. That is important for sure, and it should affect the timing of a move to a new thing.

That said, things are a bit extreme right now in terms of how quickly people change jobs. But as someone else said, with the way salaries have exploded you can move to the literal same job somewhere else and get a 30% raise, so it makes total sense that people are doing it.

That may not last forever, so get your raise while you can! New opportunities will always be there.


It's totally possible to get that within a company. I worked for a Japanese company for 27 years, where they rotated assignments as a matter of HR policy. They also enforced things like code quality, formatting, process, etc., to allow easy changes of personnel. Heavy-duty mentoring (by some of the top people in the entire world) and training. There was a direct career matrix, and the corporation took very good care of its employees.

This did not come without cost. The overhead was staggering. Lots of what I call "concrete galoshes"[0].

Retaining talent is not easy. It takes a great deal more than just bucks. It requires loyalty (true loyalty, not the "weasel" loyalty that corporations like to put into glossies) from the corporation to its employees, and this needs to be inculcated into the DNA of the entire management chain, from the CEO to the first-line managers. Like I said, it needs to start at the top. I seriously doubt there are many top-level folks who have the stomach for it.

I did it, under very challenging conditions, for over 25 years. When they finally rolled up our department, after nearly 27 years, the engineer with the least tenure had a decade. These were top-shelf, highly experienced C++ image-processing/algorithm engineers, not tired, sub-par oldsters.

[0] https://littlegreenviper.com/miscellany/concrete-galoshes/


Yes, of course there are different experiences and paths; I certainly wasn't implying that there was only one way.

The company you worked at sounds like it had a really good structure and really put in the work and time to help employees grow, and unfortunately that's not the case everywhere.

But I would say, from my experience across many jobs and from speaking with my peers, the consensus is that making diagonal moves in your first 10-15 years on the job will land you in a better place than staying put that whole time, both from an experience and skills perspective and from a role/seniority/salary perspective.


It really depends. In my experience, after about 2 years in a specific role you will start to develop expertise in your specific organization and tech stack rather than generalist capabilities. There are good reasons to stay after that (you love the team, could work in the domain forever, getting tons of growth opportunities/raises etc.), but you should understand the tradeoff. You may be trading off learning how to be an effective worker in general vs. learning how to be an effective worker in your specific organization and you may be making significant financial sacrifices vs. people that switch jobs.


As long as you stay young forever, you're sorted, bouncing around.

I have been around long enough to watch others that have bounced around suddenly hit the "Boomer Wall," where they get ghosted by recruiters.

It's very sudden, and quite jarring. Absolutely terrifying. I hit it, but it wasn't the same shock as others have felt, because I hadn't been job-hopping.

Make sure to save a nest egg, folks. I did, and that's why I'm in a fairly enviable position (if you count surviving being treated as radioactive by tech recruiters as "enviable").


Not that baffling. Paychecks expand by orders of magnitude when changing companies, and corporations treat their loyal employees like dirt.

It’s a systemic issue that needs to be dealt with in the C-Suite (which means there’s no relief in sight).


Mentorship is about more than writing "great code". I'm 6 months in at a new place as an old. I've already imparted a lot of lessons about working with difficult team members, picking your fights, letting go of "perfect", remembering there are more important things than being right, etc. My team is already less stressed, more productive and generally happier. :D


Yes. Totally agree. My comment on code quality was just one tiny aspect of a large continuum.

Strong teams are critical to doing big stuff. You get strong teams through lots of “soft” skills.

And also keeping employees for multi-year stints.


> Good enough is good enough.

I’ve only been in my first role for about two months and this one hit the hardest for me personally.

I imagine it has a direct correlation with impostor syndrome, but I spend a lot of time writing things over and over again because in the back of my mind I’m thinking “is this the correct approach? Will they think I’m failing if it’s not how they’d do it?”

I consider myself lucky that I'm in a position with a great team that's very helpful, but I struggle to balance the line of when to ask questions - which often stems from not trusting myself to be up to their standard.


I also had the problem of not wanting to ask questions and look stupid, and my manager told me to ask the questions I need to get the task done. Very simple and effective.

Also, instead of rewriting things in solitude many times over, it would be a lot more effective to write something pretty good (in your eyes) and then ask for feedback on it. That way you can either validate that you're doing a good job, or start to learn what "good" looks like.


> I also had the problem of not wanting to ask questions and look stupid, and my manager told me to ask the questions I need to get the task done

my concern with junior engineers is when they ask no questions (they have weird hang-ups or they're completely off the rails) or ask the same questions over and over (didn't admit they didn't understand the first answer, or they're not writing things down).

any number of distinct questions is generally fine.


It's a hard one to learn in the first 5 or 10 years or so I think. For me the correct approach is the one that solves the problem, feels good enough for the user (with varying definitions of good enough depending on the business goals), and doesn't create a maintenance headache.

There is a large amount of room in there for correct solutions.


That’s actually great. You are playing and exploring, which will definitely lead to deeper skills. It’s not impostor syndrome, it’s just realizing where you are and putting in the time to improve.

It’s only an issue if you stress out too much over it, or if you spend too much time doing it.

Maybe schedule a time for this type of work, like 1h a day, or 3h. Really depends on the type of work you’re doing and what the circumstances are.


What I learned is that there are no absolute truths in this line of work.

There are only tools and techniques, which can be applied depending on the context. Some of them fit 99% of the time. Some almost never. None is written in stone, and none of them will make a project good or bad on its own, let alone commercially successful.

You become senior when, thrown into a new environment, you can say which of these will help move things forward, and you know that not because "everyone does that" but because you truly understand what they will do to the project and the team.


That’s the essence of the post. Every “absolute truth” is more of a talking point that is relativized and leads to more insightful and grounded comments and learnings.


For me, one of the things I learned early on is that there are a lot of salesmen in the software world who want to sell you something, whether it's tooling, methodology, classes, or whatever. Cargo culting is rampant, and everyone wants to sell you their version of a hammer to nail down any problems with your software.

This doesn't necessarily make them wrong, or mean they have nothing valuable to bring to the table, but much like everything else, you should be careful not to fall into the trap of assuming that because it works for them, it works for you.


Senior developer at 21 is part of the problem with software now. Companies only want to hire young people, but those hires have very little life skill or knowledge outside of their field.

They arrive with vim for the latest and greatest, shunning the tried and true. Yes, architectures need to evolve, of course, but this isn't evolution. It is the flavour of the generation, with novelty trumping functionality and usability in many cases.

The author does mention the importance of a mentor and I completely agree with that. I think mentor should be a job title with extra perks, rather than someone who is 4 years older than you who knows where the best espresso machine is.

Grump.


I deployed by ssh-ing into a server and running git pull.

That sounds wonderful. Let's do that again.


There's a reason most deployments moved away from that:

- Sometimes you need more complex actions to update a service than just "update the code": migration scripts, dependent service restarts...

- It's fine on one server, but the more you have the more difficult it gets

- Changes and dependencies accumulate on the server, which can cause weird errors, or configurations that are necessary but not written down anywhere.

- Too easy to say "let me just modify this thing here in production" and end up with bug fixes that are only applied on certain machines and certain versions...

I get the "let's do things in a simple way again" sentiment, but it's not like the complexity is there because people like complexity.


> Sometimes you need more complex actions to update a service than just "update the code".

Yes, software that was updated with a `git pull` always had a script or executable that got the environment up to date with the new version.

> It's fine on one server, but the more you have the more difficult it gets

Nope, it's easy on as many servers as you want. It's just a matter of running `ssh $x 'cd dir && git pull && dir/update_environment'` in a for loop. There were plenty of tools to manage that loop too if you didn't want to keep the host list in a simple text file.
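A minimal sketch of that loop (the host names, repo path, and update script are all hypothetical, and it assumes passwordless SSH):

    # deploy.py -- sketch of the "ssh in a loop" approach.
    # Assumes each host already has the repo checked out at /srv/app
    # and exposes an ./update_environment script (both made up here).
    import subprocess

    HOSTS = ["web1.example.com", "web2.example.com", "web3.example.com"]

    for host in HOSTS:
        subprocess.run(
            ["ssh", host, "cd /srv/app && git pull && ./update_environment"],
            check=True,  # abort the rollout on the first failing host
        )

The check=True is doing real work there: without it, a failed pull on one box silently leaves the fleet on mixed versions, which is exactly the drift problem described above.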

> Too easy to say "let me just modify this thing here in production"

Well, that's a "don't".

> Changes and dependencies accumulate on the server, which might cause weird errors or necessary configurations that aren't written down anywhere.

And that is the one reason why this was abandoned almost everywhere, as soon as it became possible.


> always had an script or executable that gets the environment up to date with the new version.

Which means you’re writing deployment tools already. Going from there to an abstraction that provides the primitives you use in those scripts is a natural step.

> There were plenty of tools to manage that loop too if you didn't want to keep the information on a simple text file.

Indeed. But again, this shows how deployment tools evolve naturally from “git pull” to something more complex. It’s not complexity for the sake of it.

> Well, that's a "don't".

If only managing computers were as easy as telling people what not to do.


> Too easy to say "let me just modify this thing here in production"

This sounds crazy to me, like saying "it's too easy to say 'let me just poop on the floor right here' instead of going to the bathroom properly". What!? I can sorta vaguely understand why someone might think that's a little easier in the moment, but someone who has that little control of themselves needs serious help, if not a good old-fashioned kick out the door. That is not a technology problem, it's a human problem -- and a major one.


The reality is that those things happen. Technology must inevitably deal with human problems too, it isn’t as easy as saying “just don’t do that thing”.


I am curious at what scale you justify using containers.

I was just messing around in AWS, made a cluster, and left it up overnight (a few days); I got a $10 bill. Which is fine, but in contrast my personal apps/sites have been running on a $5/mo VPS.

Anyway, it's still cool to know and eventually use this somewhere, but I'm concerned about how you keep it fresh if you don't personally use it.

edit: I left it on longer than that (90+ hrs), and the other cost was EC2 NAT gateway hours


I start with containers, even if we're deploying single binaries. A local replica of the prod environment is incredibly useful when you inevitably need it. GitHub Actions/GitLab CI build containers out of the box, and deploy to ECS/Kubernetes/DigitalOcean/whatever just as easily.

I can set up a CD pipeline on a fresh project to AWS with GitHub Actions in about 15 minutes, and I will never need to touch it again until the app grows to a scale where I can't develop and run it as one person. For anything other than a literal toy project, the payback period is roughly the length of time it takes me to make a coffee while I wait for an EC2 instance or Fargate cluster to come up.


What's your bill like? Or I guess it depends on each person's sensitivity to the cost, but yeah. Maybe it doesn't have to be expensive if you do it in a "raw" way.


"it depends", but mostly I don't care, it's a rounding error compared to developer time. Digitalocean lets you deploy containers onto app service so you could do it for $5/month there. On AWS you can do it for about $15.

> Maybe it doesn't have to be expensive if you do it from a "raw" way.

If you look at straight $$$ cost, sure, but the moment you take any development time into consideration you want to build on abstractions. Sure, I could run Docker on EC2 on a t2.micro behind the free tier, but if it takes me half a day to set up and I make $50k/year, it will take almost a year of it being free to pay off the cost of running it in Fargate. _This_ is where AWS saves you money, not on buying compute from EC2.
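Back-of-the-envelope with those (purely illustrative) figures:

    # rough break-even: hand-rolled free-tier setup vs ~$15/month Fargate
    hourly_rate = 50_000 / 2080            # $50k/year over ~2080 work hours
    setup_cost = hourly_rate * 4           # "half a day" ~= 4 hours ~= $96
    fargate_per_month = 15                 # the ~$15/month figure above
    print(setup_cost / fargate_per_month)  # ~6.4 months, before any upkeep

Fold in the odd hour of patching and babysitting the instance yourself and it stretches toward the year mark.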


I see, thanks for the insight, I'll keep this in mind. I looked at Fargate as an option but still don't know when to use it, but yeah, in due time I'll learn.


Fargate is excellent. It takes the machine and host management out of running containers. If you run Docker on EC2, you need to manage the scaling of the hosts and the containers, and also the packing of both. With Fargate you just say "I want 1 vCPU and 2GB RAM, and scale at 80% utilisation". I've worked on projects that were too big to be cost-effective to run with that model, but for personal projects and for my current job it's excellent.


I'll mess around and see if it's worth it. I just think it's overkill for my personal use case.


For a personal use case, DigitalOcean [0] + GitLab [1] would be $5/month, and it's pretty much idiot-proof and bullet-proof.

[0] https://docs.digitalocean.com/products/app-platform/how-to/d... [1] https://www.howtogeek.com/devops/how-to-build-docker-images-...


That's how my present office does things (more or less, not git but same idea), making every box a pet that only a small number of experts can keep running. Good for the experts, bad for the customers, and turning bad for us as a project because it's pissing off the people paying to keep the lights on.


I'll just repost these thoughts by Dave Cutler, designer of the kernels for Windows NT/XP(...) and (Open)VMS:

“I have a couple of sayings that are pertinent. The first is: ‘Successful people do what unsuccessful people won’t.’ The second is: ‘If you don’t put them [bugs] in, you don’t have to take them out.’ I am a person that wants to do the work. I don’t want to just think about it and let someone else do it. When presented with a programming problem, I formulate an appropriate solution and then proceed to write the code. While writing the code, I continually mentally execute the code in my head in an attempt to flush out any bugs. I am a great believer in incremental implementation, where a piece of the solution is done, verified to work properly, and then move on to the next piece. In my case, this leads to faster implementation with fewer bugs. Quality is my No. 1 constraint – always. I don’t want to produce any code that has bugs – none.

“So my advice is be a thinker doer,” Cutler concluded. “Focus on the problem to solve and give it your undivided attention and effort. Produce high-quality work that is secure.”

https://news.microsoft.com/features/the-engineers-engineer-c...


> 5. Everything must be documented!!!!

I mean, it should

> Focus on automation over documentation where appropriate. Tests or other forms of automation are less likely to go out of sync. So instead I try to focus on writing good tests with clear language, so developers working on code I wrote are able to see how the project functions with working code.

This makes me uneasy. I don't want to pick on OP, because it's a common enough opinion.

However, both is best. I don't think people really read tests to understand code, usually because tests are so contrived and shit that they are a burden to be ignored.


> However, both is best. I don't think people really read tests to understand code, usually because tests are so contrived and shit that they are a burden to be ignored.

I have read tests for this kind of reason, but usually in codebases without static types. I consider static types mostly a form of machine-verifiable and tool-friendly documentation.
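A small made-up illustration of what that buys you (the Receipt type and charge function are invented for the example):

    # A typed signature as machine-checked documentation; mypy keeps
    # it honest in a way a prose comment can't be.
    from dataclasses import dataclass

    @dataclass
    class Receipt:
        customer_id: str
        amount_cents: int

    def charge(customer_id: str, amount_cents: int) -> Receipt:
        """The signature alone states inputs and output for the next reader."""
        return Receipt(customer_id, amount_cents)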


Discussion at the time, 516 comments:

https://news.ycombinator.com/item?id=20124018


I think the point about senior job titles is good. I haven't looked closely at other industries, but I doubt that many places think sub-five years (or even maybe sub-ten) is a senior level of experience.

Admittedly, as the article points out, ours is a particularly diverse field, but if you're going to have a label for "person who's done this for quite a while" you'd imagine the time period would be longer.


>In reflecting on this first decade of getting regularly paid money to type weird symbols into my Terminal, I wanted to take some time to share some of the ways my thinking shifted over the years as a developer.

I think that's a very narrow take on what being a developer means. I am not getting paid for that; I'm getting paid for solving problems. Some easy, some hard, some very hard.


The meta-lesson here is that considering certain truths to be "absolute" is harmful.

The greatest asset you can have as a programmer is flexibility. Understand the tradeoffs (e.g. tech debt vs over-engineering, thorough testing vs shipping quickly) and navigate them intelligently, rather than dogmatically pushing everything to one side of the tradeoff.


One more thing not in the article: sometimes leadership, or whoever is responsible for the roadmap, can FAIL to do their due diligence, and you can end up spending a few months working on something meant to be "a great product", only to realize after it ships that the customer just isn't buying it.


> Early in your career, you can learn 10x more in a supportive team in 1 year, than coding on your own (or with minimal feedback) for 5 years.

I learned 10x more coding C++ and Rust in my room, watching Twitch streams of people doing interesting things and watching Cppcon talks, than from many years of coding in other languages at work.

> Disorganized or messy code isn’t the same as technical debt.

Try fixing a production problem at 4 AM and having to read some messy code. It will make you think "why did someone think it was acceptable to have this in a mission-critical system?".

Try having to maintain a critical feature in a non-breaking manner and having to deal with some messy code. It is frustrating.

So, messy code is unprofessional. It's bad workmanship.


>> Early in your career, you can learn 10x more in a supportive team in 1 year, than coding on your own (or with minimal feedback) for 5 years.

> I learned 10x more coding C++ and Rust in my room, watching Twitch streams of people doing interesting things and watching Cppcon talks

Exactly! You had a whole team supporting you, asynchronously.


Off-topic, but which streams? Would love to follow along with some interesting Rust ones.


Mostly streams about competitive programming, security, OS development, programming language development and game development.

Streamers in these categories come and go.

On Twitch you can find the developers of the zig and jai programming languages, the developers of Factorio, top competitors from codeforces, and all sort of interesting projects.

Occasionally there are university professors, security researchers, etc.


I see people get tech debt wrong. I've heard it called all sorts of things. My current team thinks it's easy tasks, 1-pointers. I've seen it called sysadmin work that isn't business-focused.

In my opinion, tech debt is the result of a choice. Tech debt is the remainder of a solution modulo "good enough". You choose to do something you consider less than perfect with the understanding that you're just delaying some of the work until later. That left-over work is tech debt.

For example: we need to implement auth on our web app, but let's hard-code the permissions and do RBAC later; that's hard and not really valuable with such a limited feature set. RBAC in your web app is therefore tech debt.
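A hypothetical sketch of what consciously incurring that debt looks like in code (all names made up):

    # Hard-coded permissions: a deliberate shortcut, with the deferred
    # RBAC work recorded where the next person will trip over it.
    ADMINS = {"alice", "bob"}  # TODO(tech-debt): replace with RBAC lookup

    def can_delete_project(username: str) -> bool:
        # Good enough while admin/non-admin is the only distinction;
        # revisit when we grow real roles.
        return username in ADMINS

The debt isn't the messy part; it's the known gap between this and the RBAC you already decided you'll need.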


The point about over documenting is okay, but reliance on automation and tests is completely separate from documentation. They serve different purposes. The code is not (all of) the documentation. People who rely on code comments, API docstrings/comments, and tests as documentation are thinking of documentation only at the level of the interface, not the bigger picture: that's a junior mentality.

Also, documentation becomes stale and unreliable because too often too little attention is paid to it. It's not as fun as writing code, and it's arguably harder since the audience is either non-technical or has a different area of technical expertise.


> Loads of companies and startups have little or no tests.

Personally I haven't seen a codebase with no tests since like 12 projects ago, give or take. But your mileage may vary of course. Maybe I've been just lucky.


>So imagine my surprise when I showed up at my first day on the job at a startup and found no tests at all.

In my experience, this is the norm, not the exception, especially in certain areas like 'web development'. Everybody talks about tests and nods sagely in their direction, but I've rarely encountered a meaningful test harness.


I've never had my code reviewed. The majority of my code was written in the days of MS-DOS, Turbo Pascal, and eventually Windows and Delphi. I was the solo developer at a small firm in the Chicago area with one product, which I had written.

As a retired person, I think it's like skydiving, something I'll never do. Unless of course, something on my github takes off.


>I learned all that fancy Angular-specific JSDoc syntax. My code was always twice as long because it had so much documentation and so many comments

That's a very bad habit. Code should be self-documenting. If you need to write comments to explain what the code does, you've done a poor job of writing it.


Comments aren't bad per se, but they should explain the "why", not the "how".
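A made-up Python illustration with both kinds side by side (the failover story is invented for the example):

    import time

    def fetch_with_retries(fetch, max_retries=3):
        for attempt in range(max_retries):
            try:
                return fetch()
            except IOError:
                # How (redundant): "sleep 2**attempt seconds" -- the code says so.
                # Why (worth keeping): the upstream API briefly 503s during its
                # nightly failover window, so transient failures are expected.
                time.sleep(2 ** attempt)
        raise IOError("upstream still failing after retries")

The first comment ages with the code; the second records a decision the code can't express on its own.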


Luckily she didn't do COBOL or PL/I, or she would be fixing stuff 24/7. For me it was more about copying others and not trying to come up with a better/more clever solution for something every single time. Sometimes the current solution is just good enough.


Really enjoyable read. I'm in a software mentoring role now and definitely going to share this with new devs. It's always important to recognize just how much you don't know, and that being too rigid in your thinking can really hinder things.


Previous Discussion from 2019: https://news.ycombinator.com/item?id=20124018


Wow. What a refreshingly smart developer. Somebody who is smart enough to question their own beliefs and learn from it. It is so rare to meet somebody like that. Kudos!


C'mon guys, just write some damn tests. It's not that hard.


It's very easy to force people to write tests.

For example, a Sonar quality gate on new code added to the code base.

It's hard to make people write tests that make sense and are worth running.


The keyword is “some”. In web dev there are things that lend themselves to automated testing, for example domain logic (see the sketch below). That's where testing is often most beneficial, and most helpful during development. Start with that; it won't save time if you don't.

Other things, like UI, I think are best tested manually and visually. This takes a ton of time but leads to the highest quality and confidence.
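To the first point, a made-up example of the kind of domain-logic test that tends to pay for itself (the function and its rules are hypothetical):

    # test_pricing.py -- domain-logic tests that double as a readable spec
    def apply_discount(total_cents: int, percent: int) -> int:
        """Round down; a discount never pushes the total below zero."""
        return max(0, total_cents - total_cents * percent // 100)

    def test_ten_percent_discount_rounds_down():
        assert apply_discount(999, 10) == 900  # 99.9 off, floored to 99

    def test_discount_never_goes_negative():
        assert apply_discount(100, 150) == 0

Run it with pytest; each test name states a business rule, which is about as close to "tests as documentation" as it gets.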


No truth is absolute, to begin with.


Does that apply to itself?


I deliberately skip the semicolons


This is well written.

The list should be longer.


This is like.. a list of things to avoid doing.


Was good, good with it.


Many good points and a great article overall, though the one that I think is possibly most overlooked is:

> Having some technical debt is healthy


Bad developers and companies have little to no testing.

Yes, it's fairly common out there. Doesn't make it right.


Quote 1: "Next year, I’ll be entering my 10th year of being formally employed to write code. Ten years! And besides actual employment, for nearly 2/3 of my life, I’ve been building things on the web. "

Math: If 2/3 => 10, then 3/3 => X. X = 3/3 * 10 div 2/3; X = 30/2; X = 15. So the author is 15 years old and has been writing code since age 5, and during this time she was also employed. Unusual, but not unheard of, to be employed as a child.

Quote 2: "I was 19 years old when I applied for my first technical job. The position I was applying for was called “Student Webmaster”."

Whoa there, we just calculated she's 15, so how was she 19 when first employed? Something's off in one of the above quotes.


Besides employment. So like the rest of us, she likely coded before she started doing it professionally. I don’t think your “2/3 => 10” equation is supported by the quote. Seems pretty clear she started building things at 9 or 10, and got her first job a decade later.



