The Grug Brained Developer (grugbrain.dev)
1068 points by huimang on June 22, 2022 | hide | past | favorite | 374 comments



> sometimes probably best just not tell project manager and do it 80/20 way. easier forgive than permission, project managers mind like butterfly at times overworked and dealing with many grugs.

In my experience, I have to fight to keep my devs from over-engineering their solutions and to just get something going.

I'd love to work with a dev who's happy to think about how to most quickly deliver value and who's willing to help with the cost-benefit analysis for each suggested feature.

Problem is, for that, you need a developer who cares about and deeply understands the use case/the product. Many of the devs I've had to work with were more interested in building pristine codebases with clever abstractions and the ability to scale to unnecessary numbers of users or bytes.


It's a question of incentives and accountability.

PMs aren't usually accountable when their shortcuts come and bite the team further down the line. Developers feel the pain instead.

PMs won't be honest with the business that they sold an undercooked product. Need to suddenly scale up that "pragmatically" designed database? I know in my heart that too many PMs will _never_ manage expectations with their superiors if they can harangue developers into overtime, out-of-hours alerts or death marches. It’s asymmetric risk.

Don't take that personally. I’m sure you, individually, do the right thing. But my experience is that PMs / POs / team leads as a group can be bad actors. It's just the way incentives are structured for middle managers. By the time the problems of your "pragmatic" architecture have emerged, your typical PM / EM / team lead will either be working on another team, doing another job, or talking about "tech debt" as "just one of these things", a mysterious natural force, visiting your team stochastically like a freak summer snowstorm or an outbreak of measles.

_That_ is why developers are cautious. Do you _seriously_ think that you are the only person in your team who understands "commercials" or "cost benefit analyses"?

Experienced developers do, which is why we've learned the best strategy in the long run is to avoid offering too many concessions to people without skin in the game.


Hi. Former engineer turned PM here.

Wow, you're spinning some wild stereotypes here, and while some are based in truth (as are all good stereotypes), I'm going to take issue with this:

> PMs won't be honest with the business that they sold an undercooked product. Need to suddenly scale up that "pragmatically" designed database? I know in my heart that many PMs will _never_ manage expectations with their superiors if they can harangue developers into overtime, out-of-hours alerts or death marches.

While I've certainly seen incompetent, bureaucratic and/or poorly incentivized PMs, I don't think I've ever met one who wants to throw their developers under the bus to get a job done. I'm not saying it doesn't exist anywhere, ever (big world out there), but you've made a couple faulty assumptions:

1) A PM is not "a middle manager". It's generally a non-management position, despite the title. The classical "PM" has to use soft force for everything, and only gains authority through a history of doing the job well. We're constantly managing expectations in every direction, and "up" is just one more.

2) Even if we were "middle managers", those amongst us who have been working for a while realize it's bad practice to leave a trail of burned-out colleagues behind us, due to a history of bad decision-making.

I'll add a third, specific to my own history:

3) One reason PMs might not "care" about scaling up that database is that it almost never needs to be scaled. Seriously.

The engineer side of my brain wants to optimize everything. The PM side is always having to remind that side that most of my engineering life was spent in a cycle of useless premature optimization. The war continues.

Anyway, it might be good to talk to your PMs and not assume they're evil villains. And if I'm wrong and you're working in a truly Machiavellian hellhole...you should look around. I hear it's a pretty good job market.


> Anyway, it might be good to talk to your PMs and not assume they're evil villains.

I've always thought of PMs as more or less aliens with ray guns, tapping at their watches. Communicating with them goes nowhere since they don't empathize with your point of view. The only thing they're concerned with is when something's going to get done. The only form of motivation is the threat to your continued existence in the company.

> While I've certainly seen incompetent, bureaucratic and/or poorly incentivized PMs, I don't think I've ever met one who wants to throw their developers under the bus to get a job done.

Perhaps. It's also possible you've never worked at enough shops to see it.

> one reason PMs might not "care" about scaling up that database, is that it almost never needs to be scaled. Seriously.

You're not the one who's going to be called at 4am when postgres takes a dump. If you volunteered to cycle into the on-call rotation, I'd have more sympathy for your point of view. Until PMs do this, I decline to see it that way.

I guess I feel like the PM is a worthless position unless the PM is writing code alongside the team, and can fully appreciate the technical problems -- and more importantly -- offer technical solutions.


A good PM is worth their weight in gold, but the problem is that the qualifications for a good PM are very hard to make objective. This leads to the role becoming a magnet for power-hungry MBAs with a bean-counter mentality, who will tend to outcompete anyone with real knowledge of UX or engineering unless the company understands the danger of money/power-driven PMs and actively counteracts it by focusing on actual product skills.


I’ve found that only hiring technical PMs can be really helpful in this regard. YMMV depending on the product, but when it fits it’s great.


I've been the guy who gets called at 4am when postgres takes a dump. I've also been the guy doing the calling (well, not really as a PM, since again, we don't generally have the ability to tell anyone what to do at 4AM, but I digress.)

> Communicating with them goes nowhere since they don't empathize with your point of view.

Can't speak for everyone, but again, I've been on both sides of the table, and I've never seen an example of this. I think it's rare.

> The only form of motivation is threat of existence in the company.

This is so extreme that I feel I must respond just for the sake of other people reading it: if your PM is motivating you via threats, something is very wrong.


> if your PM is motivating you via threats, something is very wrong.

Seriously? This is how 80% of companies work. In many of those companies, stack ranking is a very effective tool for running people out the door. It's often _baked_ into corporate culture.


If that's your experience, you either have been extremely unlucky, or you should look back on your own behavior.

I've had some problems with PMs before, but never involving threats, and that would go over very poorly with most people I know.


In 2022 what's the leading cause of people leaving a company?

Toxic Management / Toxic work culture.

https://www.cnbc.com/2022/01/14/the-biggest-reason-people-qu...

Modern management has had, what, 100 years to fix this now?


I think the grandparent meant that the PM is motivated by the threat of having to leave due to a failing project.


That's how I read it.


> I've never seen an example of this

Without knowing what your experiences are, as context, it's not possible to see the value in this. How many places (for how long) did you work as a dev?


> Former engineer turned PM here.

I think your experience as an engineer has biased your view. Take number 3, for example: with your engineering experience you might think that, but ultimately it's the engineering team that has to deal with the scaling, not you.

Your incentive is to scale everything back to meet deadlines, while the engineers' incentive is to make everything work so that when it goes live they don't get called into a meeting to be thrown under the bus. As the only ones who actually produce something that can be criticised at the end of the day, the engineers' natural incentive is to minimise that.

The problems that arise from this are not people problems. It's not a problem of engineers spinning stereotypes or PMs acting in bad faith; it's a problem of incentives being aligned against each other.

Except that it's not a problem; it's everything working as intended. The higher-ups want PM and engineering to have it out with each other, as they see that as checks and balances.


As mostly an engineer (though I've spent time as a tech/team lead and engineering manager in the past), I think you've missed the point.

Most of the things will not need to be "scaled" at all. Ever.

As an engineer, I mostly struggle to come up with a simple solution while accommodating all the crappy, leaky abstractions in the rest of the code: that's the hard part of the job. And 20 years in, I still wonder why people think it's smarter to introduce seventeen layers of abstractions for things that have one or at most two implementations, but that's what they do.


> I still wonder why people think it's smarter to introduce seventeen layers of abstractions for things that have one or at most two implementations

Tell me you are a Java dev without telling me you are a Java dev. :)

(While I know it's not just Java with this problem, my personal experience is that Java is the worst.)


> Tell me you are a Java dev without telling me you are a Java dev. :)

FWIW, I am not :)


Indeed, concern about how well your product might scale to handle high load and use of umpteen layers of abstraction sounds somewhat contradictory to me! (I'd also say most scaling problems aren't necessarily all that hard to solve, but having some idea of what limits might be hit fairly quickly if there was an anticipated need to grow the userbase is a pretty key part of building SaaS platforms. Even single user apps can suffer from scaling issues if suddenly they're expected to deal with far more data than anyone had bothered testing with. Allowing the engineering team, including QA, time to assess potential scaling issues is surely the first step.)


I think some harsh language was used (probably because of frustration), but I think they didn't mean to imply outright _malice_.

More like "incompetence" (eg poor people skills, planning skills, and often lack of technical understanding), followed by shallow human nature which gives in to survival instinct, which generally leads the PM to "blaming developers", so they they don't themselves get in trouble (and a lot of people have problems owning mistakes in general, but that's enough misanthropy for one post).

Also, they don't really _blame_ the developers by pointing a finger at them and shouting "he did it" like some childish display. They try to soften the blow (humans usually try to be decent; we just can't always agree on what decent is) by adding an adversary into the story (like the technical debt, or hidden complexity).

Edit: Forgot to address this, but talking to a PM can often backfire, because you need exceptional interpersonal skills to have those kinds of difficult conversations with a PM. And if they're insecure and take it the wrong way anyway, they can make you even more miserable.


Any engineer that moves away from engineering never really enjoyed or "got it" anyway, by using the "I used to be an engineer!" tag you're just trying to score extra points. It doesn't make your argument more compelling, cmv.


Check my profile. I was an engineer for a long time, and I enjoyed it. Been writing code in a professional capacity for multiple decades.

I became a PM because I wanted a new challenge while not being totally divorced from making things. It's a harder job, in multiple ways, and one of the things that makes it hard is convincing (mostly junior) engineers that I know what I'm talking about. There are many days when I want nothing more than to go back to the simple, closed-form world of writing code. Compared to dealing with people, even the hardest coding problem is straightforward.

I probably shouldn't be replying, but I've noticed too many coders hearing "PM" and flipping the idiot bit. I want to do something to fight back against that trend.

It's good not to make assumptions about other people.


Thank you for your pushback, seriously. This "us vs them" mentality, especially applied to PMs, is outright toxic and counterproductive.

I was blessed to be able to build a team of developers who understand business value and prioritise accordingly, who like building things for others and not just a shrine to their intellect [1], and I wouldn't want to have it any other way. It's amazing to work in a team where most challenges are product and market challenges, and the rest is just pragmatic technical considerations. The world people describe in this thread sounds like a self-perpetuating hellhole.

[1] although arguably it's much harder to build something simple but good enough and compatible with future changes


"I hate this 'us vs them' mentality, but I need to keep my developers in check and make sure they prioritise business value accordingly"

Is exactly the kind of paternalistic nonsense that developers have to endure.

PMs exist to shield us from the shit. I work with a great one who does this, trusts the team and lets us get on with it without introducing ridiculous process, but the vast, vast majority of PMs I've had to work with are utter garbage.


What do you mean "keep in check"? In my experience people caring about the product and value that it brings to the world and having real visible impact on it don't need "keeping in check". It is important to screen for that during interviews, yes, and it won't work in feature factories and bullshit adtech that no one actually cares about, but when it's smart people in a small team working on something generally good for the world it just happens on its own. There is nothing paternalistic about it.

Maybe when it's a large dysfunctional org, yes. Ideally PMs exist to facilitate, not to "shield".


> _That_ is why developers are cautious. Do you _seriously_ think that you are the only person in your team who understands "commercials" or "cost benefit analyses"?

I realized this attitude is common to all tradespeople, not just developers, but also HVAC techs, roofers, electricians, pretty much anyone who's long-term accountable for supporting a large, complex system.

More of an observation than anything.


> PMs aren't usually accountable when their shortcuts come and bite the team further down the line.

Years ago, was in a situation where I was working with a PO/PM set of folks (rotated around a bit, but a small team). Multiple times I would suggest X, and get "No, that's confusing, we won't need it, that will confuse users, we will never need that, etc". Then... they're gone, and the new replacements are asking "why don't we have X?"

Similarly, "hey, I need to do XYZ on feature ABC". Reply along the lines of "hey, don't worry - we just need the minimum thing up now - we'll revisit later. I will take responsibility if there's any fallout from this decision." That phrase was used multiple times over the 12-18 months.

Guess who's not here any more to answer for any of the 'cut these corners' decisions? Yep - that product person who said "I'll take responsibility".

Many things that were assumed to be 'done' because they'd been discussed earlier were later discovered to be severely cut down or missing altogether.

What's strange to me is I've seen this pattern play out probably 3 or 4 times over the past... 20 years or so. I learned it certainly wasn't a one-off unique-to-personX thing.


Do you know in advance which corners can't be cut, though? Sure, you can have a tiny post-seed team bogged down for a month rolling out Hashicorp Vault for everything because "that's how it should be done", or playing forever with microservices, or rolling out Bazel in a two-person team writing Python, not shipping enough, and going out of business. Alternatively, they can be passing an .env file with secrets around, building a monolith, and deploying it with bash/make. It might be tedious or even dangerous at some scale, but which way would leave more time to create real-world value, hopefully somewhat captured and used to pay salaries?

The question of responsibility is a funny one. For every PM jumping ship and leaving developers with tech debt, there are tens of developers bogging businesses down with unnecessary complexity [1] and jumping ship with newly padded CVs. Do you think it's any better? In fact, what do you think is worse for the world?

[1] Too much complexity can kill a business much faster than failing to scale. Loads of unicorns were unstable and failing under load (failwhale!), but that's a problem that comes with customers, which means money, which means it's much easier to resolve than having no money and a beautiful scalable stack.


Honestly, this tends to be team dependent. In any organization, there is often some "group" who is pushing to ship. Likewise, there is often a group that feels "stuck" with any fallout from a delivery - and none of the success. This could be Product vs. Engineering, Engineering vs. Support etc.

Where this border breaks down varies from org to org. No one likes having the short end of a stick. Usually if the teams have more mixed ownership boundaries and feel "the win" of a launch, then this becomes less of an issue.


I'm sorry that you've apparently only worked with bad PMs.


Unfortunately, you don't have to work with bad PMs very often to occasionally experience a year or more of misery due to their ineptitude.

Good PMs are a bit of a paradox. They are immensely impactful and bring value to every project they touch. Yet they are also far overqualified for what is often a thankless job.

I also have come to the view that even good PMs will rarely allocate the work that is truly impactful. Some of the best work of my career has been when I've gone off piste and prototyped something that has surprised my managers or disrupted the way a team thought they had to work.

You can't go rogue often, and you have to expend your political capital carefully. But you will never do the best work of your career waiting for someone to assign you it in JIRA.


I've had more job misery due to bad other devs than bad PMs.

I can negotiate with my PM. I can't negotiate with the bad dev who, two years ago, made every method call in every file depend implicitly/somewhat indirectly on 20+ things because "long argument lists = bad" but "dependency injection = good". That ship has sailed, and it's still hurting me every time I need to add a new feature.
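
For what it's worth, here's a tiny hypothetical sketch in Python (all names invented, not from any real codebase) of why that pattern hurts: the container hides the real dependency list behind every signature, while the "long argument list" it replaced at least told you what a call touched.

    class Container:
        """A grab-bag holding 20+ services, clients, and config objects."""
        def __init__(self, **services):
            self._services = services

        def get(self, name):
            return self._services[name]


    class InvoiceSender:
        def __init__(self, container):
            self.container = container  # implicit dependency on everything

        def send(self, invoice_id):
            # The signature says "invoice_id"; the body depends on the whole world.
            db = self.container.get("db")
            mailer = self.container.get("mailer")
            audit = self.container.get("audit_log")
            ...


    # The "long argument list" this replaced at least made the coupling visible:
    def send_invoice(invoice_id, db, mailer, audit_log):
        ...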


> Some of the best work of my career has been when I've gone off piste and prototyped something that has surprised my managers or disrupted the way a team thought they had to work.

When I came to this realization myself, it was both sad (losing faith in the system which I had, until then, trusted to allocate work effectively) and liberating (I can just trust myself and do what I believe to be best)


> Good PMs are a bit of a paradox. They are immensely impactful and bring value to every project they touch. Yet they are also far overqualified for what is often a thankless job.

In 2013 I went to a project management conference. I'd realised that as a developer I was stepping into this role frequently as I worked at a small agency, and I enjoyed stopping projects becoming disorganised messes.

Talking to PMs about what being a proper PM entailed, I heard too many say "if the project goes well someone else will take the credit, but if it goes badly for any reason you'll take the blame. And you don't have power, you can only ask up/down the organisation while at the mercy of politics".

So I decided "no thanks" and stayed a developer.


Best PMs I've had were former engineers. They aren't ALL great but at least when we'd tell them that to build a feature we have to lay new groundwork instead of blasting out a hack, they generally understand why that's a good idea.


I think it could actually be a pretty good strategy to optimize your job search for the degree to which you might be able to go rogue and get away with it.


I am pretty up front about being chaotic good, and about the fact that historically I add a ton of value when I am given some free rein to fix things or just make some (or a bunch of) people's lives easier. I often seek forgiveness rather than permission, but I am also willing to accept pushback (always a "hey, this fixes this and that, what do people think?", never an "I made the terrible code perfect and you must accept this"). I try to be clear about where my time is going if my own assigned work is going slower: not usually because I'm off over-engineering some internal tool, but because when you get a reputation as someone who can help people fix things, lots of your time goes to unblocking others, because they trust that they can come to you (which levels the whole team up, but sometimes means individual commitments fall behind). And I try to push for the whole team to have less fear and more freedom: a day a month where they just work on whatever tech debt they want, or throw up a quick PR for discussion about something they see, etc. I'm also clear that if the org doesn't see value in what I'm doing, that's their call to make, because I also know I can find one that will.

But in my experience the devs who want to /optimize/ for going rogue are the ones you least want re-engineering your entire auth system overnight with code no one else understands, and/or the most abrasive when it comes to arguing about why they are right. It's a delicate balance, and being honest, kind, and collaborative goes pretty far....


What are the downsides to "going rogue" too often? Where do politics come into play here?


PMs promised the business X would get done because it was great. Now they have to explain developer Y didn't deliver that. Disappointment ensues and PM has to sell rogue work as also great (so why wasn't it planned instead of the other less great thing).

As someone already said, it's a thankless job and I empathize.

Signed: frequently rogue developer.


If your "going rogue" project fails, then you can be seen as not having done any work for X weeks. If your manager was against you working on that project--especially if they needed you to work on something else--then you will be seen poorly, and that can be rough for your career.


It's almost like there might be better alternatives than command-and-control structures when building software.


We are doing a democratic design by committee. I am fairly sure that will work out worse in most cases.


Is it democratic in that people vote for a solution? Or is it everyone talks about it and consensus is built around a solution? Because while the second has its issues I would take it over the alternatives any day.


> PMs aren't usually accountable when their shortcuts come and bite the team further down the line. Developers feel the pain instead.

Developers (and people in general) are bad at predicting the future. I've seen it many times: a developer solves a problem in a more generic and extensible form than required, creating a complex solution in the hope that future changes will be easy. But then the future arrives differently than expected and refactoring is required anyway.

80/20 for me includes not solving problems you don't have but think you may have in the future. If you need to solve X, and X looks like a special case of a class of problems Y, it doesn't necessarily mean that solving the more generic but bigger and harder problem Y is a good idea.
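
A made-up example of the distinction (just a sketch, not from any real project): X is "parse dates like 2022-06-22 from one report", Y is "a pluggable parsing framework for arbitrary field types".

    from datetime import date

    # Solving X directly: small, obvious, easy to change when the future arrives.
    def parse_report_date(s):
        year, month, day = map(int, s.split("-"))
        return date(year, month, day)

    # Solving Y "just in case": more code and more concepts, and the flexibility
    # often bends in a direction the real future change doesn't need.
    class FieldParser:
        def __init__(self, parsers):
            self.parsers = parsers  # e.g. {"date": parse_report_date}

        def parse(self, kind, raw):
            return self.parsers[kind](raw)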


> Developers (and people in general) are bad at predicting the future.

Right now I can safely say that 80% of the time when I foresee changes, they actually happen. I won’t stop pre-empting stuff for the 20% I get wrong.


It couldn't possibly be that you remember 80% of the things you foresaw that happened vs. the ones that didn't.


I’m fairly certain it’s above 50%, otherwise I wouldn’t have that feeling. 80 just makes for a good number because we were talking about 80/20.

It’s also possible I have a selective memory, but I imagine I’d be a bit different if it often came back to bite me in the ass.


Similar here. That's not always the case, but usually after some period - 6 months min - I have as strong an understanding of the business needs as someone can have (with the exception of information which is explicitly hidden or withheld). From then on, my notion of what's needed now and in the future is generally pretty strong (and, to be honest, for many types of businesses, there's a lot of relatively generic plumbing and data needs that are evergreen).

I've been in software engineering for closing in on 30 years (first got paid for a small piece of freelance code in 1994!). I've been doing this longer than some of my colleagues have been adults, much less been in their positions in company X. Many items and needs are not that hard to foresee/predict.

Worked short term on a financial tool - dealing with mortgages, assets, etc. The field for recording "interest rate" would not accept a negative rate. I raised it as a concern. "What do you mean? You can't have negative interest rates - that doesn't make any sense!"

This was in 2018, where many European banks were starting to come to terms with negative interest rates. The company itself had funding from a large investor, and one of the stated goals was that the product was planning to be rolled out in Europe by 2020 (covid happened and I'm sure scuttled much of that).

Understanding that interest rates can, in fact, be negative, doesn't require some fancy economics degree. You could just go to yahoo finance and read news headlines that it was happening.

And... we had to create entity records representing a bank - name, address, contact info, etc. Nothing crazy, but "we need to handle multiple currencies". Someone added "currency" as the field for the bank - one-to-one only. I raised the issue that large international banks can and do deal with multiple currencies. I got some blank stares, and a "banks don't do that" from some project lead. Two months later, demoing for one of the VPs... the VP said "these bank records should be able to have more than one currency associated". "Yes sir, we'll get right on it!" Ugh...
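
For anyone who hasn't hit this: the difference between the two models is one line up front versus months of migration later. A rough sketch (field names invented, just illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class BankV1:
        name: str
        address: str
        currency: str  # one-to-one: "banks don't do that"... until the VP demo

    @dataclass
    class BankV2:
        name: str
        address: str
        currencies: list = field(default_factory=list)  # one-to-many from day one

    bank = BankV2(name="Some Bank", address="...", currencies=["EUR", "USD"])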

But hey, I'm just a developer, right? It's better to go back and tear up months of work and tests, and push back release dates, than to just listen to a concern from someone with some experience.

"For v1, we only support single currencies. Multi currency records are on the roadmap for v2 in Q3" or "We prevent negative interest rates on purpose and will notify the user that negative rates will require a call to support team" or similar responses - both would have been mildly annoying, but I felt like I was being gaslit by folks in the company, intimating that normal/basic stuff was just random stuff I was making up.


Do you understand the basic reasons why?

Developers are on the hook for bad code and complexity. Rushed code makes feature work take longer, it makes working more irritating, and creates operational work. Everyone is burned by a team that does these things poorly at some point in their career and it drains the life out of you.

They need to trust that you'll schedule time to go back and do things correctly. Clean up technical debt, nail down all the requirements, etc.; you don't want to be jumping from MVP to MVP. Maybe you do this well, I don't know. But you need to understand the motivations and incentives of the devs you work with better or you're going to be fighting them constantly.


> Rushed code makes feature work take longer

grug say nothing of "rushed code". Grug get sick if grug drinks from dirty water, even if it is closer than walking to clean water. Rushed code and dirty water hurts Grug.

But that not mean that Grug needs to dig a trench to clean water, or build wall like beaver to get more clean water, it hurts Grug's back. Grug just walk to clean water when Grug is thirsty.

Grug dig canal to clean water before, and Grug need to always keep canal clean and me not able to hunt for food because me digging new holes. One time, chief said village need to move to other side of mountain. Grug said goodbye to canal and my beaver wall. Grug should not built them.


100%. A bad code base is a mental health hazard, and that shit hurts. Of course time crunch isn't the only thing that makes a bad code base. Devs bikeshedding too much, or just not being good at maintaining a code base, can have the same effect, but not having the time to protect your own future mental health really starts to weigh on a workplace.


Inheriting someone else's mess can really hurt.

Especially when you are not given the time to improve things. Worse when you're told they've negotiated 20% extra time to clean up, but every estimate gets shortened by more than 20% to please the customer.

Then you're in the firing line when new features don't ship in time and bugs keep popping up everywhere.

It makes you question your ability as a developer. Sapping your motivation to the point where you feel drained even thinking about the next task.


Agreed. And I've found that agile doesn't really help here either.

Sometimes I wonder how we got to be highly paid, but sometimes, completely lacking the authority to do our own jobs.


Because it's a hard job that not a lot of people can actually do, but most companies still operate under the, now outmoded, model of industrial production. Under this model, every employee's role can be modeled as a fungible cog in an assembly line. This lets management, and organizations as a whole, abstract over their production process and not deal with the particularities of any given worker.

I've got a kind of fuzzy idea that this is a similar mental bias/logical fallacy to the black swan events that Taleb talks about. People assume normal distributions, but the underlying distribution is a Pareto or power law. They do this because it's easy and has lots of nice and practical properties. In the case of markets/finance, it makes the math really easy to work with on a day-to-day basis in the 99.9% case. But in the black swan event, the 0.1% case, it totally falls apart and your model cannot account for it.

Much like the financier assuming normal instead of Power Law, the production of software assumes the industrial model of production. This works for some high % of cases to get some amount of productivity out of most writers of software. But at the high end of software professionals, or in a situation in which something actually new is being built to solve an unsolved problem, this breaks down.
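
A back-of-the-envelope illustration in Python of the tail difference (parameters are mine, chosen so both distributions have mean 1; nothing here is from Taleb directly):

    from statistics import NormalDist

    # How likely is an outcome 10x the typical size?
    p_normal = 1 - NormalDist(mu=1.0, sigma=1.0).cdf(10.0)  # ~1e-19: "impossible"

    # Pareto with alpha=2, x_min=0.5 also has mean 1 (mean = alpha*x_min/(alpha-1)).
    alpha, x_min = 2.0, 0.5
    p_pareto = (x_min / 10.0) ** alpha  # 0.0025: rare, but it *will* happen

    print(f"normal: {p_normal:.1e}, pareto: {p_pareto:.1e}")

Under the normal assumption the 10x outcome is effectively impossible; under the power law it happens about once in four hundred draws.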

Anyway, this is not a wholly thought out thesis and mostly rambling, but it is poking at my mind and I figured I'd write it down somewhere.


You nailed it.


> Many of the devs I've had to work with were more interested in building pristine codebases with clever abstractions and ability to scale to unnecessary numbers of users or bytes.

I've worked with my share of them, but I've also worked with my share of coders who'll hack together the quickest possible kludge that meets enough of the requirements to seem to work on the surface.

Interestingly, given the choice, I'd have some of both those types on my team. I really like having the "architecture astronauts" being involved in the very early stage of greenfield projects. Their desire to push the boundaries can help ensure you start off on a great foundation. I also really like having "that guy" who can hack minified javascript and manually deploy it on prod servers to fix a critical bug _right now_ while someone else goes through the "proper process" to get it into the repo and through the CICD...

But you really really need a majority of your dev team who'll take the proper pragmatic approach, or at least who can be cajoled into doing so.


> I really like having the "architecture astronauts" being involved in the very early stage of greenfield projects. Their desire to push the boundaries can help ensure you start off on a great foundation.

I would argue the opposite, having inherited a complete shambles. It started when Mongo was peak hype, but would be far better suited to a relational database. Lots of developer time was spent making it run on Kubernetes, so we can scale, even though we are in no danger of having a high volume of customers in the foreseeable future (it's a very specialized domain).


I've been there, so far away from the business use of the product that my solutions focused on the tech.

I'm sorry.

Now I'm right on the front lines of the business and it's eye-opening. I think we need to take time to teach the domain to devs first. It's expensive and won't pay off unless the dev stays on the project for a while, but it's the only way to allow the dev to understand what they're trying to do for the business.

In reflection I'm wondering if the problem is more that an external consultant is often not aligned with the business. Being directly employed helps with alignment.


The problem is that most companies don't proactively teach the business side to the devs, I know mine doesn't.


I think I have a near-pathological experience of coding these days that just so happens to make me the exact kind of developer you want:

1. I still want to produce excellent code that will deliver value, work predictably, be fast, and be robust against future grugs. I am driven to do this by forces I don't understand myself.

2. I also feel a deep dread of being stuck with a piece of code for any longer than absolutely necessary lest I end up trapped in it forever, so I want to be rid of this code and be rid of it now, which means I need to find a way to get it done and ship it so I can move on to the next urgent thing.

The result is that you will get a steady stream of good code from me with pragmatic and well-documented compromises, for the low, low cost of my sanity.


Heh, that's the first time I've heard someone describe this, but I recognise the feeling. It's like constantly rushing and wondering if you're being inefficient and if there isn't a faster way to deliver what you need.


Imagine going to a master craftsperson, and telling them you want them to give you the 80% experience for 20% of the price.

How to do that without being disrespectful?

How come programmers accept this kind of disrespect for their craft? Aren’t they supposed to be the masters, the Hattori Hanzos of program code?


Martin Guitars has a range of guitars priced from ~$1,500 to $10,000.

The sound and playability on all of them is superb; what differentiates them is the materials and the aesthetic aspects - i.e. fancy inlays and stuff.

Similarly, I imagine you could go see a stone mason and ask for a simple brick wall for 20% the price of an ornate bas-relief facade, which would still be well constructed.

Or ask a blacksmith for a simple sword, rather than one with all sorts of shiny metalwork on the hilt.


Does the corporate luthier fancy working on the simple stuff, or the complex stuff? Neither, because they likely work a single phase of an assembly line instead of finishing an instrument all by themselves. This is where the Martin Guitars analogy breaks down: the guitars are not bespoke products. Pieces of computer software obviously are, otherwise we wouldn’t be having this discussion.

Look at any entrepreneur luthier, however, and the story is very different. All their instruments are unique, and they may work on some complex ones for a long time before they decide it is ”good enough” for their standards of quality.

In the same sense, independent masons and smiths are, of course, more aligned with software engineering than any of their corporate counterparts—if any, because these professions are rather contracting-oriented.

Maybe seeing complexity as inherently problematic is actually a coping strategy employed by software craftspersons who have struggled with corporate demands such as inhuman work allocation and deadlines, internalised those demands as ”the way business OUGHT TO be done” (as if expecting a punishment for doing otherwise), and eventually become grugs who shake their club at anyone who triggers their corporate PTSD.

I really hope the grug meme does not become reality.


I like the analogy of the entrepreneur luthier and, more generally, the master craftsman. I've had the experience of building software "on the assembly line", and I've also been lucky enough to be paid by the same employer to work in a way that is much closer to the "master craftsman" end of the spectrum. I much prefer the latter.

But it's only in side projects that I really feel I'm able to work as a master craftsman, without the ever-present burden to just ship a pragmatic compromise that delivers value to the business so that I can move on to the next pragmatic compromise. I wonder if this is another reason why, say, doctors working in hospitals will often also have their own private practice — sure, the extra money is nice, but the autonomy and mastery that comes with a side business might be even more important.


That's because such developers know that they won't have to maintain such code. They will probably be gone (switch to another company) in 1 or 2 years.


I think you have it backward. I'd much rather maintain pristine, clear, well-architected code than code that was shipped to a deadline, if I had a choice.


Well, there is always a deadline, isn't there? I mean, not many of us have the luxury to say, "I'll just work on my pristine, clear, well-architected code and it will be done when it's done" (except maybe in side projects)...


You can still have a deadline, just one adjusted for how long something will take if done right (aka estimate) vs how soon they want it.


You have the right insight I think. The thing about putting lots of effort into "refactoring" is that the code you refactor may actually not end up in the product at all.

I think "prototyping" is better than "refactoring". Prototyping basically means that you write your app more than once. On the 2nd round you know what needs to be there and what not. You may have come up with a much better much simpler design for the whole system when you know what needs to be there and what it needs to do.

Code-quality is important no doubt, but it is not important for code that doesn't end up in the final product.


I definitely think prototyping should be more common but the problem is I've been burned before.

"Just build a version of this thing as quickly as you can. Don't worry about performance or anything so much, the goal is to get an idea of what these features might actually feel like to use so this is just a proof-of-concept project"

Then "Oh hey we are going to give that proof-of-concept project to a client to try out and give us feed back. Don't worry they know it's just demo code at this point"....

Shortly after that though it's "The client wants these changes and you're who has to support it now. The client says it's slow, why is it slow? Client wants a new feature, add a new feature!"

And I'm left dealing with a horrid, awful, garbage piece of code because I'm the one that wrote it. I was explicitly told to write it entirely as a proof of concept and not as anything production ready which is why it's shit. This has happened to me more than once.

So at this point, until my PMs and team lead tell me a thing is going to be a prototype/PoC and it actually turns out to be one, at least once, I assume it's a lie.


The key to having prototypes and PoCs work out okay is to leave some important piece of UX out entirely.

Such an approach gets you the benefits of being able to explore a problem space with code in a quick, sloppy manner, without any risk of anyone actually deploying what you wrote.


Never make prototypes easy on the eyes. Use Comic Sans liberally. That way nobody will assume it is the final application which just needs a few touches.


That's a great idea. A prototype must look like a prototype.


Not even just "look like a prototype" - a prototype must be _incomplete_.

If it's not? You have a solution, so of course it's going to be deployed.

The fact that your solution is gonna fall over due to being brittle code is really unfortunate, but it was avoidable if you'd just built a prototype instead of a product.


I mean that's easier said than done in some cases. If your manager wants you to build a complete version of the functionality, just one that is very "quick and dirty" there isn't a whole lot you can do to make it not fully functional short of flat out refusing


Sure, but as noted that isn't a prototype - it's a sloppy product.

That's when it's time to have a frank discussion about whether that's really what they want (and maybe to consider whether this project is really what you want to be working on, if it's a common pattern).


Yes that easily happens


Refactoring is probably a sign of problems, unless by refactoring one means ”giving abstract structure to yet unstructured code”, i.e. factoring for the first time.

That needs to happen after feature-completeness, but before a project goes into maintenance.


Well yeah, if the code was without "problems" (using the widest definition of "problems", including stuff like "code smells"), you wouldn't have to refactor it. But such "problems" don't necessarily mean that the code was bad in the beginning, it might also mean that the requirements have changed in a way that the original code structure can only handle using hacks...


How are you hiring your devs?

The more abstract-problem/algorithmic the hiring process, the more I see this.

Ask practical coding problems instead, screen for practical devs who can find 80/20 style shortcuts.

(You probably still want some of the abstract folks too, though, to round things out.)


The difference is mostly a result of perspective, and that could be fixed. The PM has the use cases and the product gestalt; if this were shared and made the environment in which the devs work, their prioritizing would shift to the product, and engineering concerns would be limited to more obvious ones that make sense.

But it’s human nature to protect one’s job and decision-making autonomy. PMs, being no exception, often underweight their role in communication transfer and overweight it in decision making.


> In my experience, I have to fight to keep my devs from over engineering their solutions and just get something going.

In practice though, the grass is always greener on the other side. You either wish the developer had just hacked something together to get it going, when the feature failed, or you wish they had put more effort into making it more abstract and scalable, when the feature took off and needed to scale.

Everyone's a genius in hindsight. Getting it right is more a mix of experience, gut feel and luck, imo.


It's a balance. As an engineer, I'm constantly dealing with the consequences of a "the feature worked so we shipped" mindset. Churning out features without thinking about architecture creates increasing amounts of code "tangle" that gradually slows down feature dev. On the flip side, thinking too much about architecture slows down feature dev.


PMs represent the customer. That being the case, they push hard for features and leave it to engineers to keep features coming out at predictable speed.

Engineers represent themselves. They are accountable for feature delivery speed and product stability. That being the case they push hard for hardening, automating, and maintaining.

The two have to balance each other out. PMs need to push to keep Eng focused on growing business value, otherwise the company will never make money. Eng needs to push PMs to focus on long-term product delivery, otherwise customers will leave as quickly as they come, and new ones will stop showing up, no matter how pretty the UI looks.

If you had a team of engineers like you describe it would only be heaven for about 6 months in my experience. I've seen two separate teams of engineers run this way. Everything was kick-ass until it was suddenly horrible and engineers started bailing.


> I'd love to work with a dev who's happy to think about how to most quickly deliver value

I'd love to work with a PM who has the courage to engage over a detailed specification document.


Do you have enough of an understanding of the codebase to know what is over engineering and what isn't? If not, that's the problem I have with a lot of project managers. If you can't understand the codebase, how can you make judgment calls like that? It can be very hard to make the right call between over and under engineered, even if you're intimately familiar with the codebase.


Obviously it's a scale but I hope you aren't falling into the trap of only seeing the features. The developers actually have to work in the code so it might not make a difference to you if the feature is implemented cleanly or not (in the short term anyway) but it obviously does to them.


Microservices

grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

That made me laugh. The microservices madness of the past decade is now starting to settle down into more mature understandings, but there are still a lot of people biting off large testing, operational, and data transactionality/reporting costs.

People often don't recognise beforehand the magnitude of extra complexity & associated productivity costs.

I've seen companies architecting 50 microservices with 50 separate datastores when they actually needed 5 midi services. Then they lose half their productivity trying to work with all this.


This. Premature and excessive fragmenting into unmaintainable, operationally burdensome microservices is the bane of my existence. It's often done by zealots who see microservice architecture as a matter of religious faith, not operability convenience.

Sure, some monoliths are harder to debug if there are multiple distinct packages that need to operate together in the same container. But fragmenting a simple set of services into a dozen microservices, each in its own disconnected container, is excessive too. If any of those fragments depend too closely on other fragments to operate (like, say, a timer service and a session renewal service that depends on the timer), it will easily fall over and be nearly impossible to tell why.

If you are going to shard up your application into microservices, be sure to split out your functions conservatively, so that like goes with like. It's easier to split stuff out more later than to try to split every little jot and tittle into its own dinky container. That way you don't end up with little containers of nothing wasting processor and I/O just for the sake of dogma.


I can't tell you how many teams I've seen spend months and months engineering some vast microservice system, only to end up serving maybe 500 DAU. It could have been a single executable on a $5 VPS.


Wouldn't replacing that with a single point of failure be going a bit too far in the other direction? Perhaps three $5 instances?


Up above there’s a great thread about the perverse incentives in project management roles. This is a great example of perverse incentives in development roles - resume driven development practices.


I don't know if this is universal, but in my circles "microfrontends" are now all the rage.

How do you bring up concerns with that in good faith? It's so obviously terrible that I've no idea where to begin.


Oh lawd, at our client people recently pushed that idea, too. Unfortunately, managers were all on board ("Every team will be able to release autonomously and much faster!!!11") and it became politically impossible to argue for alternatives that won't result in dependency and infrastructure hell. So here we are now, splitting off every single page of a god damn legacy web shop into a separate micro frontend, each of which will be running on a completely different stack (some SPA, some traditional SSR with jQuery), living in a separate repository and with a separate microservice in the backend. But how does one share common UI components, you ask? Well, obviously by enabling Server-Side Includes on the ingress server. Duh.


Oh no... you know what, maybe we don't deserve to cross the great firewall as a species.


we are slowly returning to iframes :)


The first time I encountered the term, my immediate reaction was "hmm, maybe we're already kind of doing this?", thinking about mapping a dozen or so services to different HTTP endpoints/prefixes under the same root.

Then I did a search. Oh lord. One of the top results [0] is an illustrative guide on how to split out a Button component into an independent deployment... I can only hope that applying this to a Button is not supposed to be taken literally, but I can also only assume that thousands of professionals have now made it their calling to split out every component into its own "microfrontend"...

EDIT: Nope, looks idiomatic. [1]

[0] https://levelup.gitconnected.com/micro-frontends-step-by-ste...

[1] https://microfrontends.info/microfrontends/


GOD I hate microfrontends. Had a client that had a single, relatively simple application that they wanted as 3 separate angular apps. The reasoning was that they wanted to be able to autoscale them separately? Was unable to convince them that the gains from that would be negligible, especially for an internal tool, and that scaling the whole frontend instead of a single part of it would have virtually no downside since each app was basically a single form, but tons of upsides like less code complexity, easier data persistence, and fewer versioning issues, faster build time. But they didn't care. To this day I don't know what manager was sold so firmly on building 3 apps when one would do, but not my problem anymore.


Could you have done "the monorepo approach" and just have the same code deployed in 3 instances with different `APP_MODE` configuration value or something like that? :P
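
Something like this is what I mean; a minimal sketch where the mode and app names are invented:

    import os

    APP_MODE = os.environ.get("APP_MODE", "all")  # "intake", "review", "billing", or "all"

    ALL_FORMS = {
        "intake": "intake form routes",   # stand-ins for real route registrations
        "review": "review form routes",
        "billing": "billing form routes",
    }

    enabled = ALL_FORMS if APP_MODE == "all" else {APP_MODE: ALL_FORMS[APP_MODE]}
    # One codebase, one build; each deployment still scales independently if it must.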


Possibly but that client is actually what made me quit the firm so I don't even know where that project ended up (probably in the trash). I quit not just because of their weird code requirements but also because I had a 5am standup every day because their engineering team was in the IST timezone, and also had to drive to Palo Alto (about 2 hours in the morning) every day because even though the rest of their team was in Bangalore, they had a requirement that all external contractors only work on prem. The firm I worked for couldn't or wouldn't reassign me and I just so happened to get an email from a company I really wanted to work for. So I reassigned myself.


I must run in different circles because I had to Google it. Probably the only thing that made me laugh audibly today.


Sounds like a big brained term for web pages. I’d lean into that.


MicroFrontEnds (MFE) solve an actual problem, but it becomes a problem if developers do it for the sake of RDD (Resume Driven Development) or just because.

My Company has a Cloud platform which is kinda like a marketplace and users can install and uninstall apps/services. In our case MFEs are a perfect fit.


I have yet to see such a scenario where either web components or iframes would not have been a better fit.


iframes also work pretty well for this kinda thing :D


Microfrontends work well for self-contained components that need to be used in multiple pages. They don't even need to have a "separate" backend, as long as there is an area or code file for the backend code that deals with whatever data the microfrontend needs.


In such a component-oriented circumstance, what is the motivation for choosing a “microfrontend” approach instead of, say, a component?


The longer you are a developer, the more you see fads as cyclic, just like anything else in this world. Rebranding happens every 5 to 10 years for most things.


2008: Website with jQuery

2022: Server side rendering using island-based architecture for client-side rehydration of interactive components


What kind of component are you talking about? Could you give an example, because as a backend dev, I have no idea what sort of thing would require that level of separation.


Please stop.


>The microservices madness of the past decade is now starting to settle down

Meanwhile, I feel it has just started outside SV. Is the remainder of the world just perpetually 5-10 years behind?


Yes.

It's why keeping up with the latest on HN isn't essential for most of us - we've got 5 years to see if it's worth adopting :)


Depends what you mean by "the remainder of the world". Most mid-tier companies I've worked at or know people who work at in London started moving on from microservices some time ago. So I guess, unsurprisingly, no.


> Most mid-tier companies I've worked at or know people who work at in London started moving on from microservices some time ago

Are we talking about the same London? The market is still flooded with engineering playgrounds and microservices are still considered a perk and bragging point by many companies.

Finding a company with a sane tech stack that doesn't do complexity for the sake of complexity is impossible. It makes sense though - those companies already have their developers, they are happy and the company doesn't need to hire more.


I actually disagree. I think people disparaging microservices are working at companies or on problems that are too small for that solution.


I disparage them, because most companies including my own don't need them. My team are keen, but I can only see it adding complexity. If your company genuinely operates at that scale go for it. But remember that Stack overflow runs on a pretty classical architecture.

https://stackexchange.com/performance


Agreed. My company has a (relatively) mature MFE platform, and it rocks. The MFE team handles builds, deployment, and can build features like experimentation into the platform, so the teams can focus on the MFEs. The MFEs are not arbitrarily small - they are related to specific experiences. A single MFE might be a collection of pages in a flow, for example. It means less work for devs, more consistency across a big company, and much safer deployments - the worst thing that can happen if you ship a bug is to tank your MFE because they're all isolated. At a big company, being able to change something small on your MFE and push to prod without worrying about every other page in the site is awesome.


Also, my biggest worry when a client wants to transition to microservices is the state of their current monolith. An out-of-control monolith doesn't inspire confidence that they will be much better at building a distributed system.


> test shaman have good point on importance of test, even if test shaman often sometimes not complete useful feature in life and talk only about test all time, deserve of club but heart in right place

> also, test shaman often talk unit test very much, but grug not find so useful. grug experience that ideal tests are not unit test or either end-to-end test, but in-between test

amazing


These literally just get better and better.

> big brain type system shaman often say type correctness main point type system, but grug note big brain type system shaman not often ship code. grug suppose code never shipped is correct, in some sense, but not really what grug mean when say correct!

grug need start shiney rock deposit ritual for grug similar program shaman so grug has more club. maybe some club have spike or some club have more big size. me no idea what grug need but me know grug need more.

me is mere artoo not big brain like grug but me has big sympathy for grug. like grug test shaman make artoo skeptical. like grug artoo wish spread word of grug. grug too is shaman. artoo most good disciple grug.


More

> grug warn closures like salt, type systems and generics: small amount go long way, but easy spoil things too much use give heart attack


Definitely saving that, translating it to interview-compatible speak, and springing it on future employers.

Actually I need to translate a good 75% of grug's teachings to interview compatible language...


Grug is my spirit animal. I want to meet Grug.


> grug no able see complexity demon, but grug sense its presence in code base

This is the key problem with complexity.

Complexity is fine if you understand it! It's when you're aware that something is complex, but you start to get these mental force-fields pushing you away from the scary parts, that it becomes a problem.

> demon complexity spirit mocking him make change here break unrelated thing there what!?!

That's the happy case! The sad case is when you make a change and don't observe any obvious breakage, but then 3 years later you realise something you somewhat care about has been silently broken for a very long time.


>but then 3 years later you realise something you somewhat care about has been silently broken for a very long time

The trick is to never stay somewhere long enough to feel the consequences of your bad decisions


Your reply made me smile, but it's the opposite that has given me my biggest learning experiences. Greenfield projects that you stay on a while. You only have yourself to blame when there are problems a few years in.


> grug understand all programmer platonists at some level wish music of spheres perfection in code. but danger is here, world is ugly and gronky many times and code so also must. humility not often come big brained or think big brained easily or grug even, but grug often find "oh, grug no like look of this, grug fix" lead many hours pain grug and no better or system worse even. grug early on often charge into code base waving club wildly and smash up everything, learn not good

> grug not say no improve system ever, quite foolish, but recommend take time understand system first especially bigger system is and respected code working today even if not perfect

grug read this and grok spirit, grug enlightened. master not club grug so much.


Grug state Chesterton's fence very easy understand way.


I think this nicely captures everything I've learned about programming over the past n years that I wish other people would realize too.


What have been the most complex business requirements for a project you have worked on, and what made them complex? Was the resulting system less complex than the requirements?


Sorry to hijack without being asked. I totally agree with the original piece and this comment. For me the most important part is: no matter how complex your requirements are, how you deal with complexity and abstraction in your codebase and architecture will make or break it. Abstractions should match what's actually happening (with just the right minimal amount of future-telling). Useless abstractions and missing abstractions are both bad, but getting rid of a bad abstraction in a system is harder than introducing one. Erring on the side of fewer abstractions and dumber (not bad, just boring) code, especially in early stages, can save you literally thousands of hours of debugging, fixing, and scratching your head. I may be wrong; this is how I feel right now.


grug think two is company but three is crowd. no refactoring new abstraction until same code in three places. sometimes crowd is fine too, if divided nicely. grug never saw point of new class with only fetchAndParseMany<T> function (see sketch below).

also why grug keep putting state in mutable instance properties on acting class when can just be parameter arguments? save so little typing but will get curse. grug remembers making big object-oriented python2 app multi-threaded. and all that c# when in consulting on legacy codebase. grug avoid such pattern now.
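
For illustration, grug's fetchAndParseMany<T> jab as a minimal TypeScript sketch (all names hypothetical): the single-function class adds a layer of indirection and nothing else.

```ts
// The wrapper grug objects to: a class whose only job is to hold one function.
class ManyFetcher<T> {
  async fetchAndParseMany(url: string): Promise<T[]> {
    const res = await fetch(url);
    return (await res.json()) as T[];
  }
}

// A plain generic function does the same work with one less layer.
async function fetchAndParseMany<T>(url: string): Promise<T[]> {
  const res = await fetch(url);
  return (await res.json()) as T[];
}
```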



> also danger abstraction too high, and big brain code become astral projection of platonic generic turing model of computation into code base. grug confused and agree some level very elegant but also very hard do anything like record number of club inventory for Grug Inc. task at hand

My favourite part.


> good debugger worth weight in shiney rocks, in fact also more: when faced with bug grug would often trade all shiney rock and perhaps few children for good debugger and anyway debugger no weigh anything far as grug can tell

grug know debugger good but grug often realize grug no need debugger on smaller cases and only run it when grug need it, grug try simple code like print and log first, if grug sad and no understand grug go for debugger


Relying on log and print statements is like giving up. I would claim that's not simplicity, that's inexperience, but I have no idea what language you're referring to. Sometimes I do it with JavaScript when I didn't set up the project, it's using a framework I don't know, and I'm not willing to spend the time figuring out how to get real debugging working, so there are caveats.

But you definitely should not be doing that if you have a good debugger, as it's faster to click the line to add a breakpoint and press play. You can SEE the state. And you can view other variables at the same time if something's looking whiffy, usually by just hovering your mouse. Plus see the entire call stack.

The thing that's boggling my mind about this is that if you know the line to add a log statement on, you know the line to add a breakpoint on. It's so much easier to just add a breakpoint.

In some languages if I saw someone adding a log statement while debugging I would immediately classify them as a junior programmer and start teaching them how to debug properly.

Either you are using a shitty language with a crap debugger or you need to learn how to use your IDE.


Yes, I am using a shitty language.

> Thus, I fully support high-level languages in which pointers are hidden and types are strong and the declaration of data structures does not require you to solve a syntactical puzzle generated by a malevolent extraterrestrial species. That being said, if you find yourself drinking a martini and writing programs in garbage-collected, object-oriented Esperanto, be aware that the only reason that the Esperanto runtime works is because there are systems people who have exchanged any hope of losing their virginity for the exciting opportunity to think about hex numbers and their relationships with the operating system, the hardware, and ancient blood rituals that Bjarne Stroustrup performed at Stonehenge.

https://www.usenix.org/system/files/1311_05-08_mickens.pdf


Some debuggers take longer (1-3 secs) to spin up than it takes me to add a print, compile, run, and see my theory confirmed or not.

If I want a debugger and see e.g. the complete variable scope at a breakpoint, I'll use a debugger. If I just want a very quick sanity check I'll use a simple print.

Once you enter debugger land you're there. When I'm currently not in the debugger but rather compiling and executing "for real", it seems more straightforward to me to not enter debugger land when I really don't need it. Personal preference.

There are scenarios where two print statements can tell you more, faster, than breakpoints. You have to step through the breakpoints, right? Two print statements in different parts of the control flow can tell me everything I need to know at a single glance.


> You have to step through the breakpoints, right?

This seems to be a common misunderstanding, but debuggers generally also support logpoints (print statements that you can add/remove on the fly without having to close, compile and restart the application).


Point taken, I might have to look at a different debugger supporting logpoints for my current setup. Still, the time it takes to execute would be a concern for me. But I am going to check out other debuggers... hope they're snappy.

I do like to step through code with a debugger when I need it. But that's rarely the case. The usual bug-hunt is something like: do we go down path A or path B, and what's the value of C before and after? Ideally, there is already logging in place that gives me exactly the info I need. If not, adding two prints or expanding the log usually seems way more sane to me than spinning up a debugger, when I'm already looking at stdout/logs and I just need a tiny bit of additional info in there. Maybe I need a faster machine, lol.

Tracing is also a great tool, imho.


If I set a breakpoint somewhere and it ended up being a location that was useful, that's usually a good place for a log statement.

As for your point about logging being a fail condition, I was working on a distributed system where I had no control over which instance my code was running on. I would attach a debugger and make the same request a dozen times before the instance I had attached the debugger to processed the request. This wasn't a system I could set up a local instance of. I also couldn't reduce the instances to a single one because there were other devs, testers and data engineers working on it, and the system did raster processing that regularly took 1-5 minutes. I resorted to log debugging.


Yes, logging is good in many cases. It means you can observe the whole execution of your program by simply reading the whole log, whereas when you debug you can only inspect a selected set of branches. I do both.


In real-time systems (games, specifically) logging can be very useful for figuring out what happened on the frame _before_ the bug happened; in a lot of cases that is not immediately obvious from where the bug manifests (and where you'd hit a breakpoint).

The alternative is setting (sometimes many) breakpoints in different places and remembering what happened at each of them. Personally, I prefer reading a log.


Time travel debugging addresses this, but maybe would be too slow or memory intensive for games.


It totally depends on the case at hand!

I use a debugger all the time when I run into pointer-related issues, or when checking tensors in deep neural nets, etc.

In some cases, I throw in the debugger just to see what is going on.

However, I have had a few cases where the debugger slowed me down. If you are doing something in graphics that requires debugging issues that span multiple frames, sometimes it's easier to watch a value over a period of time and see why things are behaving that way. From there you can reason about what might be causing the issue. It can be done frame by frame by inserting multiple breakpoints, recording the values and viewing them accordingly! However, I prefer simple logs in such cases.

I have used both approaches as time demanded.


You can't run a system handling real money or real medical records in a debugger. Or if you are, you're violating a bunch of compliance standards and will get sued into oblivion.


You are not able to, or at least should not be able to, debug your code in production. More than likely all you will have is your logs.


30+ years professional developer here. Tell me how you would debug a customer issue that happened yesterday on their system without logging and without being able to connect to their system? Manually using a debugger is a red flag for me. It tells me that the developer doesn’t understand how to automate debugging by automatically detecting invalid application states and logging the problem. The best developers I have ever worked with never use debuggers. The worst developers I have ever worked with used debuggers all the time. Because they had so little understanding of the code that they needed to manually step through the code to get it.


I think in a way you are both right; this argument goes both ways. I've seen developers have so little understanding of the code that they needed to add logs on every second line of it. I'm talking about stateless pure functions with clear inputs and outputs: here you should only have to log the input, possibly also the output; the rest is reproducible. Some would call this tracing, not logging (see the sketch below).
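
A minimal TypeScript sketch of that input/output tracing idea, assuming a hypothetical `traced` wrapper around a pure function:

```ts
// Log the input, optionally the output; for a pure function the input alone
// is enough to reproduce the call. Names here are hypothetical.
function traced<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    console.log(`${name} <-`, args);   // input: sufficient to replay the call
    const result = fn(...args);
    console.log(`${name} ->`, result); // output: optional extra confirmation
    return result;
  };
}

const priceWithTax = traced('priceWithTax', (cents: number, rate: number) =>
  Math.round(cents * (1 + rate))
);
priceWithTax(1000, 0.2); // logs "priceWithTax <- [ 1000, 0.2 ]" then "priceWithTax -> 1200"
```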

The best systems are the ones that log just enough, but no more, whatever that means can be difficult to quantify. I find logs that are informative or actionable to an operator are the most useful, if a log entry appears more than once a minute and this entry is only useful for a developer of a particular submodule, it’s too chatty and likely an indication this module is not understood enough. If a system is doing its job it should for most of the time not say much. Getting to this state of log nirvana does however, as you say, require the best of developers.


Agree. It is how and why you use a tool that matters, not the tool itself. So judging people based on the tools they use is probably wrong. The fact that I have only seen bad developers use debuggers might just be a coincidence.


> In some languages if I saw someone adding a log statement while debugging I would immediately classify them as a junior programmer and start teaching them how to debug properly.

Ah, the awkward "tween" stage of development. If you don't switch to the management track, you'll skin your knees enough times to learn when you need one and not the other.


> Either you are using a shitty language with a crap debugger or you need to learn how to use your IDE.

or you are working in a domain where debug builds are intolerably slow.


For games you often use pragmas to selectively disable optimization. Whether that is useful depends on whether you will be debugging in a few key places or not, but if you are manually placing logs I assume so.


On the frontend I have a sweet debugger setup with Neovim and Chrome, but there's definitely a time investment in setting it up. The overhead exists almost entirely because the code goes through a TypeScript, Vue/Vite transpiler and needs to sync with the source map… so breakpoints aren't a 1-to-1 connection.

So yeah console.logs are still quite common even if you have a great debugger because it’s the most accessible and laziest option.

But there’s something very rewarding about getting really good with a debugger and figuring things out quickly.


Printf bad, log good. Grug already have logging in app. Grug use logging, Grug no reinvent logging with printf.


Except when the complexity demon creeps into the logger. That was a thing quite recently (see Log4Shell).


Really depends on what you are coding. I have my debugger set up; it's easier than trying to spot the new print statement amongst all the other log statements being spat out.


Now this is a good post. Especially the part about the frontend Complexity Demon having a grip on the entire industry, and the Fear Of Looking Dumb. It goes hand in hand.


> frontend Complexity Demon having a grip on the entire industry

Is frontend web development more complex than it needs to be? If so, how?


Incredibly so. Our frontend devs gave us a talk on React (mainly for backend devs, who know enough frontend to fix bugs). Diagrams with arrows here, there and everywhere. npm install downloads a significant chunk of the internet just to display a hello-world page. Compared to what I was doing in Django with server-side templates and a bit of jQuery a few days ago, this is incredible. It takes twice the work, for very limited benefit in our case at least. Most web apps are just displaying some data and a bit of form validation.


That's just React, which has both the highest market share and a culture of following the latest fad, combined with piecing together applications from multiple tiny modules (because "it's a library, not a framework").

Also, jQuery becomes slow and unmaintainable very fast, especially considering that modern frameworks do a lot of tricks to increase performance, like detecting a row swap in a list.

Next-gen frameworks like Svelte and SolidJS managed to avoid some of that because they compile to a minimal, vanilla JS app which is easy to read. Svelte especially at least gives you a decent stack trace.


Angular is at least as bad. The whole thing is a mess. I miss the days of server-side rendering and jQuery.


You can have that today. In fact, 90%+ of the web is exactly this.

I don't miss it though. Especially enterprise apps based on this model are just hell to maintain.


npm, babel, webpack, module compatibility, et al


You are free to develop everything in vanilla JavaScript and run your coded scripts straight in the browser. Very non-complex.

But the team that uses npm, babel, webpack etc will crush you both on development speed and stability.


Webpack is by far the worst tool I've ever used in my development career so far. I never want to see it again, yet using it has allowed me to do things that otherwise would have been a major problem. Something can be useful, but still hideous beyond belief in terms of complexity.


I feel similarly after diving into Terraform for the first time


Speed maybe. But stability? Hard to imagine how that stack of build tools is going to provide more stability.


The obvious ones: TypeScript + ESLint + Jest + Selenium. More complexity, high return on investment.

The non-obvious one: without Babel you are either writing legacy JavaScript (which is arguably not as clean and easy to read), or users will complain your site is not working on their older browser.
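
To make the trade concrete, here is a small TypeScript sketch of what "modern vs legacy" syntax means here; the config shape is made up for illustration:

```ts
// Modern syntax an older browser cannot parse at all:
interface Config { server?: { port?: number } }
const config: Config = {};
const port = config.server?.port ?? 8080; // optional chaining + nullish coalescing

// Without a transpile step (Babel, or tsc with a low "target"), you write the
// noisier legacy equivalent by hand:
const portLegacy =
  config.server && config.server.port != null ? config.server.port : 8080;
```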


How old is legacy? I seem to do OK using most ES6 features. I still am not a fan of JavaScript, but the newer features don't do much. It's not really a readability thing, but the one JS feature I'm holding my breath for is native decimals. I could really use that.


Modularization, encapsulation, coordination, safely scaling contributors/features.


Vanilla js can do all that.

Just `import` and done. No build steps.
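
A minimal sketch of that, as plain JS modules (also valid TypeScript) served straight to the browser; the file names are hypothetical:

```ts
// math.js - a plain ES module served as a static file (no npm, no bundler)
export function clamp(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

// main.js - loaded in the page with <script type="module" src="main.js">;
// the browser resolves the import itself
import { clamp } from './math.js';
console.log(clamp(15, 0, 10)); // 10
```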


As a frontend dev I am all for vanilla JS. I hate maintaining a webpack, postcss, dependency hell. I still long for the simpler days of plugging JavaScript into a static page for some helpful dynamic utility.

Where I find vanilla JS struggles is (for example) rendering a big tree of data and then needing to update some data dynamically within the tree without re-rendering the whole thing. You end up with some horrible querySelector hell, or keeping some immense table of pointers to the elements. Fortunately for us, we have some tiny libraries like lit-html that can help accomplish this. In the theme of grug, I think the ideal solution is somewhere in between.
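
A minimal sketch of the lit-html approach mentioned above, assuming lit-html is available via a CDN or import map; the tree shape is made up for illustration:

```ts
import { html, render } from 'lit-html'; // assumes an import map or CDN supplies lit-html

type TreeNode = { label: string; children: TreeNode[] };

// Describe the whole tree declaratively; on re-render, lit-html diffs the
// template and touches only the DOM nodes whose values actually changed.
const tree = (n: TreeNode): unknown =>
  html`<li>${n.label}<ul>${n.children.map(tree)}</ul></li>`;

const root: TreeNode = { label: 'root', children: [{ label: 'leaf', children: [] }] };
render(html`<ul>${tree(root)}</ul>`, document.body);

// Later: mutate the data and render again - no querySelector bookkeeping needed.
root.children[0].label = 'renamed leaf';
render(html`<ul>${tree(root)}</ul>`, document.body);
```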


“Can do” doesn’t always (or even often) mean “can do well” or “can do well according to our needs” or “works for all our use cases” or “works for all our users” or…

If vanilla JS checks off all those boxes for you, that’s genuinely fantastic and I’m happy for you. And a lil envious tbh. But there are a myriad reasons I/we currently can’t justify ditching the build toolchain, and most of them relate to scaling in a way that fulfills our requirements. I can’t imagine I’m alone in that.


IE users are now complaining that your site doesn't work.

Like I said, stability.


I wouldn't call that stability. You might. I have the luxury of never caring about problems like that. Most people can probably achieve it now that IE is officially super-dead or whatever.


Yeah it's not like the complexity doesn't come with some wins. The tradeoff has some serious negatives but there are good reasons those tools are used so widely.


let's be real. nobody is crushing anybody in development speed with webpack. Use Rollup, esbuild or Vite.


I think the crux of the matter is to use the right tool for the job, once you have fought hard not to use that tool. Too many devs see something shiny and want to incorporate it into their workflow somehow; this is what introduces bad complexity. Instead, you should try very hard not to use something shiny until you realize overwhelmingly that using it would be a net win.


and this is why I have to use nitter or invidious to avoid the wrath of the complexity demon


Hahahaha that’s a good one. The quality of the end product depends 99.99% on the quality of the developers. Not the tools they use.


The whole CJS/AMD/UMD/ESM nightmare has included some prominent devs intentionally breaking packages for certain build types because 'ESM has been around for a while, why haven't you updated yet?'


Hehe, good one!

Oh, you're serious? :D


If it's so bad, you're welcome to write your applications in vanilla JavaScript, with no shims, in flat files without modules calling 1998 era DOM APIs.

There's a reason most people don't, however.


developer see javascript think "looks frozen yogurt came out wrong end, but can do job". end up building castle of wrong end frozen yogurt.

some say better than no castle. grug say why building with no-no soft serve in first place. (grug know answer: more shiny rock)


Please stop the grugposting, it is impossible to read.


you in thread for grug story! no like, no read!


Made me smile, have my upvote


always be grugposting


Meh, grug just <script src="/js/mithril.js"></script> and ES6 at past job. Way less headache not needing build crap, just one blub maven Spring repository. Big ball of whatever may be seen as bad practice, but is honestly no worse than "best practices". Maybe not best for resume oriented development. Guess I had the luxury of only needing to support one browser.


Vanilla js has modules though. You don't need a build step to use modern APIs.


Average js zealot.


As funny as this post is to read, I don't want to see yet another developer say "complexity bad". I want to see a company deliver high-quality products with very few bugs at a fast cadence, and continue to make major changes long into the future without slowing down.

_THEN_ I want developers from that company to share their opinions about how they do it. Do such companies/products even exist? Software is so bad these days that maybe no existing software lives up to our ideals.


> I don't want to see yet another developer say "complexity bad".

> I want to see a company deliver high-quality products with very few bugs at a fast cadence, and continue to make major changes long into the future without slowing down.

you want eat cake and have cake.


...and now you have 2 cakes.


idea start with one cake. eat only cake mean no cake left :(


> _THEN_ I want developers from that company to share their opinions about how they do it.

Beware of survivorship bias.

For every successful company using a technique, there might be 10 others using the same technique but running into the ground.

And not sharing the embarrassing failure.

Microservices strike me as a pertinent example. They might fit huge companies very well, but not if your customer base might as well be served by a cheap Raspberry Pi.


I wonder if the key word in there is "company." Modern companies tend to be VC-funded monstrosities that onboard a lot of programmers to do things that make VC firms part with more seed capital.

This selects for complexity, not finished product.

We've seen a lot of one-man or small-team companies do some pretty amazing stuff, because usually they are solving a problem that they have. If you're high on VC cash (or are trying to get high on VC cash), the problems you are trying to solve are more marketing than technical.


Thanks, I needed to hear that, be reminded of it.


Yes, it's called RAD Game Tools, recently acqui-hired by Epic. If you're not familiar with RAD, they're some of the best programmers in the game tools industry.

Jeff Roberts and Casey Muratori used to do a podcast called The Jeff and Casey Show, where they'd occasionally discuss software products shipped by RAD.

One such episode details the maintenance of a garbage collector RAD shipped, which Jeff denounces as way too complicated and (IIRC) not worth the developer time or CPU cycles.

Complexity bad.


> I want to see a company deliver high-quality products with very few bugs at a fast cadence, and continue to make major changes long into the future without slowing down.

I think maybe Reaper fits the bill. http://reaper.fm/

As far as I can tell, it's two people.


few people touching the code is easy mode IMO. much easier to not introduce bugs if you've designed half the system and are deeply familiar with its various idiosyncrasies. good architecture is not a must here, although fewer people obviously also make it easier to stay consistent and make larger refactors at once without stepping on anyones toes.

sometimes that bug free thing that just works and was written all by one guy is just a real pig behind the scenes, but hey, that guy always knew what not to do, no matter how messy the APIs.


Yeah, but one of those people is the Winamp guy. His cape is easily two extra people woven into a simple cloth fabric.


You mean in the form of books, tech talks, articles and so on? There's plenty of that. But it is usually very technical or about abstract problem solving, sometimes also historical/anecdotal. Things like that.

Methodology stuff sometimes comes from people who don't seem to code much at all, or haven't for a long time. I don't really read those things often and when I do I tend to skim them out of boredom.

Software design, architecture and paradigms are a mixed bag. There is plenty of distracting stuff there, and much is from people who again, don't code. But there are a bunch of rare gems in this area, even coming from academics/teachers or veterans who haven't been in the trenches for quite a while, but have plenty of useful things to say.


Doesn't Apple "deliver high-quality products with very few bugs at a fast cadence, and continue to make major changes long into the future without slowing down" ? I'm not sure how they do it; but I like their products. It seems like they do it by saying no to a lot of complexity; while embracing necessary complexity.


Complexity doesn't necessarily slow down feature development. In my experience it reduces the project's longevity: at some point the system is too big and complex to rewrite, all while more glaring problems emerge and cannot be dealt with.


If I had a nickel for every time I was hired to rebuild an app from the ground up because it became too complex for anyone to work on, I'd be able to keep up with inflation.



The advice is fantastic! Fear the club.

> grug tempted to reach for club and yell "big brain no maintain code! big brain move on next architecture committee leave code for grug deal with!"

> grug tempted reach for club when too much agile talk happen but always stay calm


Grug and I have essentially identical software development philosophies. I appreciate you Grug. This, especially, was where I felt kindred spirits colliding.

> type systems most value when grug hit dot on keyboard and list of things grug can do pop up magic


For me, that's 50% of it. The other 50% would be:

> type systems other most value when grug make wrong, but no user see because big red arrow point to first.


The other big one is that I think Rust-style enums (especially as they enable null-safe types) are amazing. Lots of real-world problems are solved well by really darn good enums.
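
For illustration, a rough TypeScript analogue of the idea (a discriminated union rather than an actual Rust enum; all names hypothetical):

```ts
// An Option type: absence is a variant the compiler makes you handle.
type Option<T> = { kind: 'some'; value: T } | { kind: 'none' };

type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'rect'; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case 'circle': return Math.PI * s.radius ** 2;
    case 'rect':   return s.width * s.height;
    // no default: adding a variant stops this compiling until it is handled
  }
}

function unwrapOr<T>(o: Option<T>, fallback: T): T {
  return o.kind === 'some' ? o.value : fallback; // null never sneaks through
}
```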


grug hate magic number, magic string


Rust enums eliminate those (unlike C enums)


This is true and excellent, and I'd put it next to Hickey's "Simple Made Easy" and Metz's "The Wrong Abstraction" as something I wish every developer would grok.


I would recommend those to anyone here that hasn't seen them.

I think this requires a big shift in community thinking. Programmers are trained in universities to throw OOP principles at every problem, and the DRY principle has deeply taken hold. Obviously both have their place, but they are often used overzealously and come at the expense of readable and maintainable code.


In "Simple Made Easy" he's recommending XML, JSON and even SQL for data. So I'm going to have go to with "No."

All that does is move somewhat messy language constructs into somewhat messy data constructs.

The data is static-ish, which is nice. But the associated code has exploded in complexity and with XML particularly you can't even be sure if it's truly compliant - especially if it's coming from an outside source.

So I think that whole talk is spectacularly missing the point. The idea is good but functional/read-only doesn't solve the problem.

Because some problems are just hard, and no amount of functional twiddling makes the hardness go away.

Names are hard. International localisation is hard. Dates and calendars are hard. International holidays and special dates are hard. Addresses are hard. Timezones and time systems are hard. Phone numbers are hard.

There is no trivial solution to any of these problems.

The most effective solution would be an international standard collection of algorithms and APIs baked into identical standard libraries for each mainstream language.

What you get instead is a lot of people wasting time and energy solving these problems over and over - badly.

And in fact XML, JSON, SQL, etc are similar. There are countless libraries and frameworks for managing these, and they're all slightly different and likely to fail in different ways - not just in different languages, but in multiple competing frameworks for the same language.

The problem isn't the code, it's the culture around it. Instead of solving a problem completely and standardising, the industry is addicted to nearly-but-not-quite solving the same problems over and over.

That's where a lot of unnecessary complexity comes from.


> even SQL for data

I am lacking context, but SQL was made for data.


2 more point grug want make.

1. big brain write code when no idea big picture. later problem make grug have to rewrite many code, make grug want break keyboard on desk! (no idea where club, maybe left bathroom stall)

grug now make diagram first, tell purpose code. maybe take long time and grug no happy. but diagram tell many story from tiny picture. all see picture, all grok. junior grug see problem, ask question, all work together solve problem. many pat on back for junior grug, maybe give bonus shiny rock. then grug make code. less waste time throw away code not make sense, more grugs make code from picture same time. think of artist make mural with grid. grug happy when think code like art.

2. grug also told to refactor sometime, not happy. executive vp say refactor whole app, grug raise club very high. grug boss say "Ok, we'll do Embrace, Extend, Extinguish", grug lower club a little.

grug put up app facade, make feature flag and new feature, push feature to prod, test new feature with little real traffic using flag. test more and more traffic. when new feature use 100% traffic in prod, then remove old code and flag. do again until all old code replace. refactor slow but not make grug work overtime from bad deadline crunch, waste less money when refactor no work, less smash boss when no stand up to executive vp.
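
A minimal TypeScript sketch of grug's flag-gated rollout, with every name hypothetical and the bucketing deliberately simplistic:

```ts
type Req = { userId: string };

const oldCheckout = (r: Req) => `old checkout for ${r.userId}`;
const newCheckout = (r: Req) => `new checkout for ${r.userId}`;

// Stable per-user bucket in [0, 100) so one user never flips between paths.
function bucket(id: string): number {
  let h = 0;
  for (const c of id) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

// The facade: raise rolloutPercent from 0 toward 100 as confidence grows,
// then delete oldCheckout and the flag itself.
function checkout(req: Req, rolloutPercent: number): string {
  return bucket(req.userId) < rolloutPercent ? newCheckout(req) : oldCheckout(req);
}
```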


Even big brain developer is grug brain when read code later.


Big brain no read later code. Big brain go to conference.


Grug brain see grug code later. Grug brain say to grug: this code not grug code. Grug scratch head, kill favorite flea. Grug must grok old grug code but get no shiny rock for grok own old grug code. Grug sad, say to grug, next time make grug code more good. Grug boss say: no time, grug do as grug boss say. Grug lift rock, want tell grug boss stuff it. Grug not tell. Grug frustrated. Grug ask more snacks in work cave. Grug ask Aeron butt hammock.


Big brain developer go to new tribe to paint new cave for shiny rock, leave old cave with grug


> grug very like type systems make programming easier. for grug, type systems most value when grug hit dot on keyboard and list of things grug can do pop up magic. this 90% of value of type system or more to grug

This resonates with me


This reads a lot like commit comments, status reports, e-mails and tickets in a company in which everyone is from a different country around the globe.

If you can't read grug English, you will find it hard to navigate in the global workforce.


I have accidentally introduced “dope” as a thankful affirmative into our international developers lexicon at work. There’s another even more amusing one too but I can’t remember what it is. Working with global teams is fun!


Thankfully this is not totally correct. I personally found grug English to be harder to understand/read as a non-native English reader.


As a grug-brained developer, I love the spirit of this, but it's kinda hard to read. Maybe the author can publish an English translation? :)


grug think grugposting is part of the charm


Grug thinks there would be middle course keeping most of charm but less exhausting to read.


Exhausting make grug read more slowly and parse well.



This reminds me rather of a cross between "Code Monkey" (Jonathan Coulton song) and Film Crit Hulk Smash.


BE THANKFUL GRUG NO TYPE IN ALL CAPS LIKE FILM CRIT HULK, FILM CRIT HULK HAVE MANY THOUGHTFUL POINTS TO RAISE ABOUT FILMS AND STORIES BUT ALSO READING ALL CAPS FOR MORE THAN ABOUT ONE PARAGRAPH IS OUCHY

(Film Crit Hulk released a book; half of it was in all caps, the other half was the exact same text except in normal sentence case.)


On the other hand, I'd be interested in seeing a 'ten hundred most common words' version of it that dumps those complicated words like 'refactor'.


I might be willing to give this a shot. I appreciated the charm and humor in the original, but especially for readers who don't have English as a first language it probably gets in the way of the message.


Here are the first few sections: https://gist.github.com/raphlinus/f6b34f85d63c4b73a0242481ed...

I'll keep going if there's positive feedback, delete it if the author doesn't appreciate it.


This is so much easier to read, thank you


I like it. It still carries a little bit of whimsy and puts a smile on my face (while nodding).


> unfortunately also many test shamans exist. some test shaman make test idol, demand things like "first test" before grug even write code or have any idea what grug doing domain!

I am keeping that.


> Microservices
>
> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
>
> seem very confusing to grug

Yes.


This is perhaps the most succinct way I've ever seen this expressed. Agree 100%.


Love this. Always keep it simple, because future me is dumb. I will say, though, that frontend dev gets a bad rap; the tooling is complex because the problem is. UIs are incredibly stateful by nature, and you're not going to get anywhere unless you use a mature framework that lets you reason about them effectively. No, I'm not talking about using React to make a static site, but anything with a modicum of complexity gets out of hand pretty quickly.


> oh well, grug end up at good company anyway and rob pike dress habit increasingly erratic, so all work out in end, but point stand: logging very important!

beautiful...well done....


Horseshoe theory proven right once again: grug and the 100x'er megaminds are aligned, and the midwits are overcomplicating and underperforming.


Grug feel related, make grug happy. And not know why but grug notice that grug style make easier leave club resting. Grug think practice talk grug to inside grug, make club battle few.

Also, grug not know if big brained or grug. Grug think grug but not see big brained. Big brained think big brained stop big brain and become grug. When stop think grug big brain, big brain grug return. Hard, and bore. Life such.

Now sleep, soon shiny rocks collect.


I have been thinking about the complexity bias that affects our perception of quality. I think as programmers it is our natural assumption that if something is complex, lots of thought must have gone into it beyond our understanding. This is especially true when starting a new job. But this is slightly ironic because often more code makes something slow, which isn't a good thing at all.


Code can be thought of as explaining to a computer what you want it to do.

In general, explaining without managing complexity is easy and makes you look smart. However, explaining so anyone can understand is very hard. The grug author is good at this.


Start calling bad abstraction "indirection". If I'm debugging something, I don't want to be chasing a rabbit through what calls what, which constant is stored where, what pattern created what instance, etc.


I hated dependency injection until I escaped Javaland and discovered that you could do it with direct interface references instead of four layers of indirection.
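
A minimal TypeScript sketch of that style of dependency injection, with hypothetical names: the dependency is just an interface handed to the constructor.

```ts
interface Clock { now(): Date; }

class InvoiceStamper {
  constructor(private clock: Clock) {}
  stamp(): string { return `issued ${this.clock.now().toISOString()}`; }
}

// Production wires in the real clock; tests hand in a frozen one.
// No container, no configuration layers, no extra indirection.
const live = new InvoiceStamper({ now: () => new Date() });
const test = new InvoiceStamper({ now: () => new Date('2022-06-22T00:00:00Z') });
console.log(test.stamp()); // issued 2022-06-22T00:00:00.000Z
```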


> grug brain developer try collect learns into small, easily digestible and funny page, not only for you, the young grug, but also for him because as grug brain developer get older he forget important things, like what had for breakfast or if put pants on

grug relate. other day grug forget Angular have pipes even though grug use async pipe in same PR.

This is particularly concerning for me because, not being too strong in the klaka klaka klaka 600 LoC daily department, I have always relied heavily on my knowledge. Is it a coincidence that this started happening after exactly ten years of commercial experience?


grug brain like pokemon brain: have to forget thing to learn new thing. sometimes old thing useless, sometime old thing very important like bank password


grug brain tip: password manager. grug only remember one password. even on other computer, grug look up password on phone

grug brain tip 2: use grug-brain-compatible password generation algorithm. Grug only need remember algorithm

Example: f("hacker news") = "GRUG<3CLUB" + "hand" + "neck"

grug invent first part with number and symbol one time. rest change depending on site


What a fun article!

When reading it, I felt like it was loosely inspired by A Philosophy of Software Design by Ousterhout. And it was! Near the end it is listed as recommended reading. Cannot recommend it enough.


"deserve of club but heart in right place"

I love it.


>grug brain developer not very smart, but grug brain developer program many long year and learn some things although mostly still confused

grug admit confusion is puzzle. grug blame complexity for confusion. grug make better relation to grug's target. grug reduce complexity and confusion. grug win woman and shiny rock.


big brain really like post. lesson learn is deep!

"(best grug brain able to herd multiple big brain in right direction and produce many complexity demon trap crystals, large shiney rock pile!)"


Now we add GRUG to KISS and YAGNI.


Developers talk about reducing/avoiding complexity all the time. And yet the very same developers keep making things more complex. I am starting to realize that most developers don't understand what complexity is. So they say crazy things like "microservices are less complex than monoliths". And they actually believe it! When I then show them, step by step, that microservices are actually objectively more complex than a well-designed monolith, they agree with every step of my explanation but somehow still don't get it. Very strange.


How long until you see the first 'grug-brain' t-shirt at your workplace?


Never, maybe. People will notice it's a modified version of the 4chan wojak and cancel Grug.


More likely, the creator might recognize that a lot of people who like this kind of stuff have a habit of "hiding their power level" by not buying it and displaying it on their chest. Going ahead and buying that shirt is something like what we used to call the "Silk Screen Goku Shirt" phenomenon, but the newer generation tends to refer to it as "Ahegao hoodie" Syndrome.


small (optionally 3d printed) caveman on desk, with removable club for small bonks


This is like if Code Monkey[1] wrote a whole manifesto.

[1] https://www.youtube.com/watch?v=v4Wy7gRGgeA



this is art and all of us should be ashamed.


This is wonderful. I feel myself thoroughly agreeing (but also somehow intensely disagreeing at the same time) with things that are written, especially when it comes to type systems. At some sort of deep level, I seem to like the idea that stuff is proven to work, regardless of how complex the type system overhead is. Maybe it's just two sides of the same coin. Some sort of primitive hatred of complexity and a primitive desire for safety and control will forever be at war inside me.


This might be the best text on programming I've read since Why's Poignant Guide


this might be the highest compliment ever received in the history of hacker news, thank you


Australians of a certain age might have a different (yet not that different) take on the Grug brain: https://www.simonandschuster.ca/books/Grug/Ted-Prior/9780731...


Hah! I read the whole thing thinking that of that grug. Which gave it a really odd but lovable vibe in my head. TIL there is more than one Grug.


this grug read article, but author grug big brain, article too long. If article was smaller with less complexity, easier for regular grug to read and remember.

</grug>


grug read one sentence, grug feel smarter already. grug read next sentence, grug feel smart on different subject. grug hope have one sentence in inbox every day. or maybe little bird chirp sentence at grug. but no complex, just many subject.


> grug has never used erlang, hear good things, but language look wierd to grug sorry

maybe grug try drink elixir?


The writing style has the same cadence as the speech of “Uncle Roger” of egg-fried rice fame: https://youtube.com/c/mrnigelng


>> over time grug learn this hard debug, learn prefer write like so:...

I thought grug would say, "grug used to use debugger but now grug stare at code until grug understand code - debugger bad brain drug for grug".


Nice article, but a bit tedious to read in improper English. It is fun for a paragraph or two but wears out rather quickly. If you've got something important to say, why not just say it as clearly as possible?


grugspeak imply that message self-evident and that complexity is not really in world but invented by badspeakpeople

but world is complex, and babytalk is denial


I am groot


Grug no talk of estimation spirit demon.


You've heard of Film Crit Hulk? Here's Software Dev Hulk.


Can we get this idea as popular as Joel's rockstar developer? See it in job ads? I think I would prefer to work with guys like this.


Grug no read good but grug fink is good advice here


This bats 0.950+

Who is this person? They must be a very senior Jedi


> grug not like big complex front end libraries everyone use

> grug make htmx and hyperscript to avoid

https://twitter.com/htmx_org/status/1539713592794169346

https://github.com/bigskysoftware


His name is grug.


His name is Robert Grugson


This is a long-winded way of saying we don't know how to write software, so we opt for "ha ha, funny" instead.


> sad but true: learn "yes" then learn blame other grugs when fail, ideal career advice

This is a big part of the general career skills I teach interns when I'm mentoring. Learn how to not lose the game of musical chairs. When a project is clearly going to imminently collapse under its own weight, change teams before it happens.


For your enjoyment, here is a part of the article being read in a Yoda voice by a pretty good speech synthesizer:

https://jonaquino.blogspot.com/2022/06/grug-brained-develope...


Well meaning...

However, what I _REALLY_ want is a sw-dev (let's just say it like it is: _PROGRAMMER_) version of the BOFH stories.


that grug say many variables good. make this grug many happy.


put explicit type on variable even more good for read code


I sense a certain age-range here.


grug sense a certain age range in comment


Good stuff, reminds me of PLT Hulk [0], although obviously Grug isn't a fan of powerful type systems.

[0] https://twitter.com/plt_hulk


Holy shit this is the best learning I have read on hackernews, funny and so very true.

> given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex


This makes me so happy


Never before have I read such wisdom. Young grug impressed.


this is nothing short of amazing


The problem here, as I currently see it:

What could be complex to some could be simple to others.

How could grug developer possibly make sense of such a contradictory statement?

My name is Groot!


cf. 'Grunk' in the very funny Lost Pig: https://pr-if.org/play/lostpig/

I skipped most of this post, but the combo of name and writing style seems like a homage.


think hard ==> always need think hard

think hard about think soft ==> no need think hard all time


I can't tell who this article is making fun of.


People who think simple, easy-to-understand systems are beneath them.


not who, what. grug make fun spirit of complexity, but many time grug git blame who summon spirit, see grug summon spirit. grug humility. everyone complexity sometime. grug make fun frontend, but grug write frontend sometime.


Kevin, is that you?


Basically "Simple Made Easy"


Heh, me not grug brained right?


That's a great write up.


Ah, the ample club of wishful thinking.

There are two general ways of approaching software design (and I'm paraphrasing Tony Hoare here):

1. You can write software so simple there are obviously no errors

2. You can write software so complex there are no obvious errors

One thing that escapes "grug" is that achieving 1. often requires more sophistication than their magical club allows. Most well-intentioned "grug" developers will write software so simple that it becomes its own form of complexity: a giant mud-ball of for-loops, while-loops, variable assignments, and other wonderful side effects. Instead of addressing complexity head-on with abstraction, "grug" will beat "galaxy brain" over the head.

What grug fails to understand is that simplicity isn't easy or familiar. It doesn't mean "sticking to what you know." It often requires being able to reason about programs and to verify that reasoning!

But go ahead grug... keep beating people over the head with the club and hope that the complexity will go away.


As usual the truth is somewhere in the middle. I’ve seen nearly $1m in development cost and a year of time lost to the complexity demon due to big brains playing AWS Service Bingo while a competitor hired a single grug to do the same task.

That single grug got it done in ~1 month for basically nothing, and without the multiple AWS service overhead it ran much faster, fewer resources, and dead simple to maintain. Bigger company bought the smaller one, then proceeded to toss the grug code and continue the big brained approach, as far as I know never reaching parity.

But there were cool network diagrams, data diagrams, and all sorts of new, interesting, and complex technology developers enjoy playing with.

I’m more inclined to side with grug now.


The OP's point is that being able to produce a simple solution to a complex problem only seems like the product of "refusing to be smart" from the outside. From the inside, there's always a delicate balancing act: each step involves choosing approaches that may generate complexity, just in different ways. The same data used in different places in the application? What do you do? Objects? Globals? Cut-and-paste? Any of those solutions might be fully justified or might degenerate into a tangle by the time the application is done. Once you have a nice, simple working application, you can say "it's all just 'grug'", but that's essentially what they call a "humble brag".


Undoing someone's copy-pasted for-loops and while-loops is called "factoring" and undoing someone's bad abstractions is called "re-factoring" and is about 10x as slow. At this point in my career I have seen way more crappy abstractions and needlessly abstracted code than grug-brain code.

And, to be brutally honest, as much as I love those functional combinators, first-class functions, streams, etc, they suck to reason about.

Sometimes loops are better!


> And, to be brutally honest, as much as I love those functional combinators, first-class functions, streams, etc, they suck to reason about.

> Sometimes loops are better!

That I think is backwards. A loop could be doing literally anything - it probably is futzing with global variables - so there's no way to reason about it except by executing the whole thing in your head. A map (or mapA) or a fold (or foldM) or a filter or a scan is much more amenable to reasoning, since it's so much more specific about what it's doing even without looking at the body.
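
For illustration, a small TypeScript sketch of that reasoning advantage; the data is made up:

```ts
// The shape of the computation is visible before reading any bodies: a filter,
// then a map, then a fold, each step promising it does only that one thing.
const orders = [
  { total: 120, shipped: true },
  { total: 80, shipped: false },
  { total: 45, shipped: true },
];

const shippedRevenue = orders
  .filter((o) => o.shipped)         // selection only
  .map((o) => o.total)              // transformation only
  .reduce((sum, t) => sum + t, 0);  // accumulation only

console.log(shippedRevenue); // 165
```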


I like to write loops where they follow the functional map/filter/reduce paradigm where they don’t mutate anything except some initial variables you “fold” over (defined immediately prior to the loop) and which are treated immutable (or uniquely owned) after the loop.

I find this has good readability, and by containing the mutation you can reason about it "at a distance" quite simply, since further away it's for-all-intents-and-purposes pure code (see the sketch below).

What you might lose are the compiler-enforced guarantees that a functional language gives you. Some languages give you the best of both worlds - with rust you could put this in a pure function (immutable inputs, owned outputs) and the borrow checker even reasons about things like this within a function body.
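
A minimal TypeScript sketch of that contained-fold loop style (names hypothetical):

```ts
// One local accumulator, mutated only inside the loop, read-only afterwards.
function totalsByCategory(
  items: ReadonlyArray<{ category: string; price: number }>
): ReadonlyMap<string, number> {
  const totals = new Map<string, number>(); // the only mutable state, loop-local
  for (const item of items) {
    totals.set(item.category, (totals.get(item.category) ?? 0) + item.price);
  }
  return totals; // callers see an effectively immutable result
}
```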


I think a loop is ok if it only modifies variables whose scope is the loop.

And that's what map() basically does.


> it probably is futzing with global variables

The opposite is true in my experience. Loops working over variables local to a function are a lot easier to reason about than a series of intricately intertwined lambdas, if those share state. If everything you do is functional, then a series of pipelines might be alright, but it might not. If the functions are spread all over the place, it can take a larger amount of mental load to keep all the context together.

Code should permit local reasoning, and anytime that is obscured, rather than helped by, abstractions, we incur additional cognitive load.


> Loops working over variables local to a function is a lot easier to reason about then a series of intricately intertwined lambdas, if they share state.

I mean accessing a mutable variable from a lambda is obviously insane, agreed. The whole point of using the functional combinators is that you don't do that.


I think we probably agree in a lot of concrete cases but discussing in the abstract tends to bring out polarizing statements. That's why I qualified most of my remarks, because I'll happily ham it up with a functional style...in tests, and other places where I don't feel strongly about what's going on.

When you are forced to use some accumulating global state, that leaves you with writing in a different style (loopy, if you will), which maybe is a good signal to the reader that something is weird or different, but then again maybe it isn't.

One thing that bit me using Java streams recently is that it completely broke down when I had a concurrent data structure that needed to have the coarseness of locking tuned for performance. Laziness and functional streaming operators had to just go out the window to even make it clear where a lock was taken and released. So loops it was.


> One thing that bit me using Java streams recently is that it completely broke down when I had a concurrent data structure that needed to have the coarseness of locking tuned for performance. Laziness and functional streaming operators had to just go out the window to even make it clear where a lock was taken and released.

Well yeah, that's very much expected. If you're even talking about locking, stream transformations aren't a good fit (except maybe if you do the microbatching/orchestration style where your stream transformation steps operate on chunks of a few thousand items - and even then your locks should be scoped to a single transformation step).

(Now I'd happily claim that for equal developer time one can usually outperform a locking-and-mutation implementation with a stream-processing implementation - not because the low-level mechanics are faster but because it's easier to understand what's going on and make algorithmic improvements - but that's a rather different matter)


Not true in reality, so your theory must be wrong.


I've been doing this professionally for over a decade and it's been true in my experience, shrug.


I've seen a massive, "simple", abstraction-free function of 500-1000 lines take smart engineers quarters to safely "factor". I've seen this same thing happen a handful of times. Big spaghetti function requires someone to build a new test suite from scratch so that they can refactor safely, because everything ends up having spooky surprising distant effects, because there isn't any abstraction to contain things.

They're dead simple, there's no abstraction, its just loops and variables (and state, which is the real killer). But they're impossible to reason about as a result.


Sure, this happens. But imagine it was 10 times that size and factored poorly. If you had the same lack of tests, I'd contend it'd be a worse nightmare, because you have to first reverse engineer the confusing old design, unconfuse it, and then incrementally refactor it.

You can write bad code in any language with any constructs. The more code you do write the worse bad ideas just metastasize like cancer.


I personally almost always use loops (in Java) unless I know the dataset being dealt with is small. The exception is when I'm writing stuff in Scala, where maps and flatMaps are first class, but diving into Scala is its own can of worms.


I think you found the head of the nail, so let me hit it:

"Simple" code is simple to reason about, but it is not as expressive.

"Complex" code is just as simple to reason about, but being more expressive, it requires more intimate knowledge of the language features and abstractions used.

Then there is "bad" code, which is confusing for reasons other than domain complexity.

Sometimes complexity just gets handed to a developer even if they say no, and that doesn’t make their complex code bad code.


> "Complex" code is just as simple to reason about, but being more expressive, it requires more intimate knowledge of the language features and abstractions used.

> Then there is "bad" code, which is confusing for reasons other than domain complexity.

I've previously seen these two summarized as "the difference between complex and complicated".


> Instead of addressing complexity head-on with abstraction, "grug" will beat "galaxy brain" over the head.

The argument made in the piece is more nuanced than that. The author points out that you often cannot address complexity head-on (in particular not with abstraction), because you don't even know what your complexity looks like; as the author says, complexity isn't trivial to see.

This was the old problem of inheritance as a paradigm, which tried to anticipate the structure of programs and code when often you can't anticipate what shape your program is going to take, often leaving you in dead ends and wrong hierarchies or taxonomies.

The author isn't saying not to abstract at all, but not to do it early. Casey from Handmade Hero had a similar style he called, IIRC, 'compression-oriented' programming, implying that you write your code, and as you go over it again and again you see where duplication pops up, and only then do you factor it out into some higher abstraction.


I can see where one can build upon that intuition. If one replaces “abstraction” with “indirection” it becomes apparent what is happening.

When programs are defined in terms of operational semantics, that is procedurally, it must still be reasoned about in some way if your goal is to write a program so simple there are obviously no errors.

Patterns of indirection are not abstractions. It’s a fine practice for controlling indirection in code! And highly effective. But it’s not what I mean by abstraction.

Abstraction enables new layers of semantics by completely encapsulating complexity. Think of function application in almost any procedural language that is compiled to some target machine language. The compiler has to generate a lot of code in the host language to manage memory segments, return pointers, etc. As the programmer using the language you don’t even think about it at all.

One doesn’t arrive at such an abstraction by writing a bunch of spaghetti code and encapsulating common patterns.

And that is often how one arrives at simple code. It’s not “simple” because it doesn’t require anyone to learn anything or work hard to understand it. It’s simple because once the abstraction is established and proven you’re free to think about bigger, more interesting ideas.


> One doesn’t arrive at such an abstraction by writing a bunch of spaghetti code and encapsulating common patterns.

No, you don't get an abstraction like the one you described. But you often can hide a huge mess of complex code behind an interface and make it simple for the rest of the system to deal with it, e.g. think of the interface to a search engine or a machine learning system.

> It’s not “simple” because it doesn’t require anyone to learn anything or work hard to understand it. It’s simple because once the abstraction is established and proven you’re free to think about bigger, more interesting ideas.

In many systems there aren't any more interesting ideas, and in that case an abstraction you have to work hard to understand is a liability. People don't have time for that. People particularly don't have time for changing it when the outside world changes such that the abstraction no longer fits. And the outside world changes quite frequently.


This. Develop the logic and flow for the whole thing: make it big, code it simply and even redundantly, but keep it really basic. That's your "proof of concept". Pass two is finding all of the redundant stuff and factoring it out into functions and libraries, grouping like with like. It's like relational database design: first you grab all the fields needed, then you group and normalize them.


Whoa now, I was with you until you seemed to imply that "simple" software means shitty code. Simple in this context means a solution with as little complexity as possible. Hopefully written elegantly and carefully, with lots of thought for future maintenance and readability.

Maybe you use a 10-year-old, constantly maintained and thoroughly tested library to do 90% of the work instead of writing everything yourself.

Would it be faster to compute some end state without intermediate calculations? Probably. But how about we just spin a loop forward instead if it's more accurate that way and easier to understand.

What if we cached values, then made sure the caches are always in sync? That should speed things up. Well, maybe we'll write it without caching and see how that goes first.

How about an intricate system of locks and serializable transactions so that multiple requests can run at the same time? Or maybe we just queue things up and run them one at a time.

Nothing to do with nested for loops.
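The last trade-off above (a queue instead of locks) might look like this; a minimal Python sketch, assuming a single worker thread is throughput enough:

    import queue, threading

    jobs = queue.Queue()

    def worker():
        # Single consumer: requests run strictly one at a time,
        # so no locks or serializable transactions are needed.
        while True:
            fn, args = jobs.get()
            try:
                fn(*args)
            finally:
                jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    # Producers just enqueue work instead of contending on locks.
    jobs.put((print, ("handled in order",)))
    jobs.join()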


> Hopefully written elegantly and carefully, with lots of thought for future maintenance and readability.

Those are fine things to value even if they are highly subjective.

You might think it’s easier to read a while loop with a handful of mutating variables and a few branches. I might disagree.

I think it’s easier to read an accumulating function that uses traverse.

It’s wishful thinking that we should both hold the same values or that only one of us is right.
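For a rough illustration of the contrast (traverse itself is a typed-FP idiom; this Python analogue only gestures at it):

    # Mutating-loop version: state and branching in full view.
    def total_evens(xs):
        total = 0
        for x in xs:
            if x % 2 == 0:
                total += x
        return total

    # Accumulating version: no visible mutation to track.
    def total_evens_acc(xs):
        return sum(x for x in xs if x % 2 == 0)

Which of those is "easier to read" is exactly the subjective call being argued about.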

> big brain type system shaman often say type correctness main point type system, but grug note big brain type system shaman not often ship code. grug suppose code never shipped is correct, in some sense, but not really what grug mean when say correct!

This big brain ships code every day.

Also streams big brain work in big brain language once a week to show people that it’s not the work of big brain shamans.

Big brain is happy to answer questions and help people learn.


> a giant mud-ball of for-loops, while-loops, variable assignments, and other wonderful side effects. Instead of addressing complexity head-on with abstraction

I very much prefer to work in a codebase of poorly written loops and variable assignments rather than one with poor abstractions.

Poor abstractions can harm way, way more than spaghetti code. Their harm usually spreads out through the system.

Imagine this:

1. A single poorly written function which takes a number of inputs and has an output, but internally is long and messy.

2. A bunch of abstracted entities which interact throughout the system, but are poorly designed.

The complexity of 1 is isolated to the implementation. There's a nice encapsulation; removing or rewriting it is easy.

But 2 becomes a complete mess which is not isolated. You'd have to skin it layer by layer. That's way more scary to me.
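A minimal sketch of case 1 (hypothetical function; the point is that the messiness is all internal):

    def shipping_cost(weight_kg, distance_km, express):
        # Imagine many more tangled branches in here; the mess
        # never leaks past the signature.
        cost = weight_kg * 0.5
        if distance_km > 100:
            cost += (distance_km - 100) * 0.01
        if express:
            cost *= 2
        return round(cost, 2)

    # Rewriting it means replacing one body; callers never change.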


>There are two general ways of approaching software design (and I'm paraphrasing Tony Hoare here): 1. You can write software so simple there are obviously no errors 2. You can write software so complex there are no obvious errors

And both are neither here nor there regarding what the article talks about, which is basic advice that always holds: avoid complexity, say no whenever possible, opt for 80/20 solutions.

>One thing that escapes "grug" is that achieving 1. often requires more sophistication than their magical club allows.

Which is irrelevant to grug's point, as he doesn't claim to advocate achieving 1.


> Most well-intentioned "grug" developers will write software so simple that it becomes it's own form of complexity: a giant mud-ball of for-loops, while-loops, variable assignments, and other wonderful side effects. Instead of addressing complexity head-on with abstraction, "grug" will beat "galaxy brain" over the head.

Hah. Related: I became disgruntled with my previous favorite language for this reason. It promoted this style of development: surface-level simplicity built into the language, with minimal tools to actually manage the complexity that is inherent to the problem space you're working in.


Alright I'll bite :)

Python? Java? C? Assembly?


My guess would be Go since I've heard that as a common complaint.


That would make for some interesting posts: "How X promotes wishful simplification," for various values of X. I imagine a common trait would be "X makes simple things simple" paired with "it makes hard things look like simple ones." There's also "X has defaults that Just Work" paired with "until they don't." I don't think I'd include assembly, C, or Java in those categories. Python? Maybe; "batteries included" is starting to break down. Go? Yep. Goroutines and channels usually work well for simple stuff, often for more complicated stuff, and the core libraries follow that pattern; failures of the model tend to be buried, and solved by turning the service off and back on. Rust? Don't think so; its schtick is making hard things into blockers. Web frameworks? Yes, oh very yes. Lisp? "Simple? Blink blink. Oh, you mean elegant."


Not who you're replying to, but Tcl was that language for me. Dead-simple syntax, documentation, and behavior. I loved it. But also dead-simple tools. And if you wanted to do anything that wasn't a "solved problem" already, you had to write your own library.

When I switched my focus to Python, everything was more effort but I could also do _much_ more with it.


Interesting - I found TCL to be very much the opposite, because it makes "meta" things so easy. Like, if you want to write your own control flow construct, you can go right ahead; it's like Lisp, except that it's easier to write macros.


> And if you wanted to do anything that wasn't a "solved problem" already, you had to write your own library.

For which language is that statement not true?


Their point is that the language they’re discussing requires you to write that library far more often than other languages do, as a second-order effect of few people using it. But you know that already; you just want to make a semi-snarky reply instead, I guess.


No, I just sincerely don't believe that library availability indicates anything except library availability. It tells you nothing about a language other than how many libraries are available.


I was purposefully vague, heh. Trying not to start a language flame... I do that enough in the actual language threads, lol.

I will say someone here was right, though. :)


Python is this: complex made easy (in Rich Hickey's terminology).


> a giant mud-ball of for-loops, while-loops, variable assignments, and other wonderful side effects.

Woah woah. This is literally the kind of complexity grug wants to avoid. Simple doesn't mean using the fewest language features or keywords; it means simple to read, follow along with, reason about, and debug. Abstractions can aid in that (like using a third-party library!), until they are imported or implemented aspirationally, and thus often unnecessarily, sometimes resulting in abstractions for abstractions, selective application of the abstractions vs. fresh implementations, etc. (...and thus AWS)

At no point does grug argue that you should stick to what you know; he just says you should trap your complexity in crystals, i.e. literally suggesting you use the Good Abstractions, When They Make Sense.


^ sound convince and probably very big brain right at times

however!

beware apply advice put form data from web page into database with many layers abstraction not needed! grug see many time!

fear of looking dumb (FOLD) great danger in such conversations and beware!


You're focusing on coding style, but it sounds more like he's talking about business requirements.

If we are talking about coding style, then the things you have identified as actually not simple are by definition not what the article favors.


I think the post actually touches on this? It doesn't say that you shouldn't abstract, just don't abstract too early.


I think of it as more of a spectrum, where abstractions get introduced as they're needed. The case where one has thousands of lines of imperative code that is terrible to debug and probably not unit tested is one where the correct abstractions were never introduced at the place and time they were appropriate. Getting this right is almost impossible; that's why we have refactoring practices.


As the idiom goes: Can’t see the forest for all the trees.

Sadly it’s often difficult to argue with someone wielding the complex=bad club, because everybody knows complexity must be avoided - at all costs!!

What one should do is understand the essential complexity of the problem, then design the solution around that, to control where the complexity goes and to avoid introducing accidental complexity. Blindly going complexity=bad will just result in it popping up elsewhere later.

Many people, as you say, don’t even know what it means and use complex=bad for all kinds of things they disagree with, including choices of tools or libraries. Using a library you don’t know is not complex, it is difficult; also something to be avoided, but that doesn’t make it the same thing. (Adding libraries and mixing competencies can of course add complexity too, but let’s not get pedantic…)


That's the classic "only coder" mindset: uncomfortable learning about software architecture, side effects, or even pure functions, they try to solve every problem with for and while loops.


Sir, may I remind you that when you get down to it, your architecture and higher order functions are all made of branching and for and while loops, and branching and for and while loops are all made of goto.


And gotos are made of jump instructions which are made of NAND gates, transistors and silicon. Good abstraction is all about hiding the details that are not essential to the problem at hand.


I wanted to say jump instructions at first, but gotos are more hated and considered harmful in these quarters.

You hide the details but you never forget they’re there, lest they spoil and start smelling really bad.

(Also, apparently the MOV instruction on amd64 gives you Turing completeness as proved by that crazy compiler, so gotos may be made of MOVs sometimes, but meh)


A particular implementation might use those things, and I will meet it on its terms when I need to. But that's a mechanical convenience, nothing more. My top-level code is written for people to read, and only incidentally for machines to execute.


There is a lot of wishful thinking that goes into galaxy brain code. It assumes you know with almost complete certainty which rules are bound to change and where extension will be needed, that the next developer will be able to understand and respect the beautiful elegance of the design instead of hacking at it to achieve the desired result, and that the code will be long-lived enough to justify the initial investment. Grug coders aren't always inexperienced; many are actually repentant architecture astronauts.


Then there's the Super Mario developer, I putta all 50,000,000 lines of code into one a fantastic pasta bowl file just for a you!


Could you give an example of (1)?

Only because I have run into weirdness with printf calling malloc, back in the day. Even hello world makes me a little nervous about those claims.

But I'd love to see a sample with explicit assertions about the environment, so I could be sure there are obviously no errors.


Software in categories (1) and (2) can be of any relative size.

(1) could be 50,000 lines, (2) could be 500.


It will go away -- to other devs, who will awkwardly and painfully route around grug, adding loads of complexity. Not grug's complexity, though!



Wise fellow, that grug.


complexity isn't bad, redundant complexity is


Yes, people often confuse inherent problem complexity (domain complexity, perhaps) with unneeded complexity introduced by code architecture.

You can reduce the latter, but often you can't reduce the former. That's why it is best to keep architectural abstraction at bay.


You can reduce inherent problem complexity by picking a simpler subset of the task to solve (for example, only supporting a single backend rather than plugins and interfaces, or excluding complex problems like text ligatures and glyph shaping). The resulting program will be shorter, have fewer indirections (making it easier to trace statically, and requiring less scrolling between functions and jumping between files), be easier to make correct and fix bugs in, and be easier for outsiders to read and learn the fundamentals of a problem domain from.

And once you've built a conceptual understanding from simple programs (JACK2), you'll be better prepared to learn big complex programs (PipeWire), which are harder to learn not only because they deal with more complex problem domains (latency compensation, multi-device synchronization, video), but also because they suffer from dynamic dispatch (requiring runtime debugging to identify which type implements an interface) and are too big to learn the architecture of by merely looking at individual files (though I hope projects get better at teaching architecture: https://matklad.github.io/2021/02/06/ARCHITECTURE.md.html).

Unfortunately, in some cases complex programs adopt fundamentally different architectures (dynamic dispatch and plugins, different data structures, extra modules like Harfbuzz and Pango, etc.) from simpler programs. Perhaps they have reasons for doing so, but in practice I find myself still confused even if I understand the simplified concepts and can write useful code using them.
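A small Python sketch of that traceability difference (made-up backends, not from any of the projects above):

    # Narrow version: one backend, statically traceable end to end.
    def render(text):
        return text.upper()

    # General version: implementations behind a registry; which one
    # runs is a runtime question, so reading the call site alone
    # tells you little.
    BACKENDS = {}

    def register(name, fn):
        BACKENDS[name] = fn

    def render_with(name, text):
        return BACKENDS[name](text)

    register("shout", str.upper)
    print(render_with("shout", "hello"))  # must find register() calls to trace this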


sometime grugbrain sometime complexitybrain


chef kiss


Too difficult to read, can't be bothered.

Does this style really appeal to anyone at all?


It appeals to me, absolutely, because I am familiar with and have a positive impression of the Grug character.


I didn't think reading it was very hard at all.



