Enterprise Software Projects Killed the Software Developer (javahippie.net)
150 points by javahippie on Aug 30, 2021 | 143 comments



Elegant and clever code won't live through a maintenance cycle.

I'll take a software developer who writes and structures code so that change requests and new code follow the same DSL across the organization. That makes changes easy. Clever people should be writing libraries or doing research.

Don't kid yourself: you are either the guy who builds the building, and it's easy because it's greenfield, or you are doing remodeling, and the hard part is making the upgrade fit in the building and not look like shit.


Excellent comment. This is true, and I think golang embodies the idea (or attempts to). Enterprise wants consistency over anything else.

That said, there is a type of cleverness that can be brought to bear on Enterprise systems - for example, designing around a simple, unidirectional data flow. It's rather abstract, but it can and will head off a lot of complexity down the line.


Can you elaborate a bit on the golang part? As someone who dabbled with Go but never found it too alluring compared to other options (if I wanted ease of use I'd go with Python, if I wanted performance I'd go lower level - C++/Rust), I'm interested in what you mean by it. Go did find a footing in the industry and a lot of cloud infrastructure relies on it; I do think it's the most interesting option among the compiled garbage-collected languages.


Go finds a really good position in between. The language itself restricts your ability to get too crazy (with, for example, types).

I hate it, but I can totally see how it would work really well for keeping your architecture simple.


Go removes non-value added decisions as teams scale:

- formatting built in. There’s one way. No preferences needed

- language constructs are dead simple. There’s one way to use them, and they’re verbose. No cleverness encouraged

- patterns are straightforward. Read the standard library code if stuck

Every Go project resembles every Go project. It’s great.


Plenty of languages have traveled that path: COBOL, RPG, xBase, Visual Basic, Delphi, 4GLs, Java, and now Go.

The problem is that you then get some enterprise architects who go crazy with the design space.


The fix is to properly document your code. "Clever" code is an anti-pattern, but there's no need to make your code less elegant or less properly engineered than it otherwise could be. Hacked-together, low quality code is even less maintainable than "overly clever" code, so it's worth trying to avoid that.


Actually - I've found the "elegant properly engineered code" a total NIGHTMARE to deal with. Reflection, endless hierarchies, complexity on complexity.

The PHP script kid basically writes a linear program (tons of duplication) with no craziness. Yes, it's "low quality", but if you do a few extract-function refactorings you've got something very easy to work with.

I just wish there was a standard template - access check, runtime complexity noted in a comment at the top of the function (i.e., O(1), O(n), O(n^2)), some reasonable comments, error handling, done. Throw in some unit tests if desired.
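
To make that concrete, here is a minimal sketch of such a template (illustrative Java; the names and the access check are made up, not from the comment above):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Objects;

    public class OrderLookup {

        /**
         * Returns all orders belonging to the given customer.
         * Runtime complexity: O(n) over the order list.
         */
        public static List<Order> ordersForCustomer(User caller, String customerId, List<Order> orders) {
            // Access check first
            if (!caller.canRead(customerId)) {
                throw new SecurityException("caller may not read orders for customer " + customerId);
            }
            // Basic error handling on inputs
            Objects.requireNonNull(customerId, "customerId must not be null");
            Objects.requireNonNull(orders, "orders must not be null");

            // Plain linear scan, no cleverness required
            List<Order> result = new ArrayList<>();
            for (Order o : orders) {
                if (customerId.equals(o.customerId())) {
                    result.add(o);
                }
            }
            return result;
        }

        record Order(String customerId) {}

        interface User {
            boolean canRead(String customerId);
        }
    }

Throw a unit test or two on top of that and the template is complete.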


"Reflection, endless hierarchies, complexity on complexity" sounds like the exact opposite of "elegant properly engineered code"


You may not have worked with folks really into "properly engineered code". But it is amazing sometimes to see how they can twist language into pretzels by knowing every trick and pattern out there. And yes, somewhere in OOP land this is all proper engineering and the abstraction on abstraction on library on library is the key.

The worst is when they start talking about future this and that. Trouble!


That isn't properly engineered code.

And if you dig into all that code, usually the designer didn't understand those tricks and patterns as well as they thought they did, and was trying to teach themselves along the way.

You can often throw away 3/4 of the code, throw away 3/4 of the "requirements" which aren't being used and wind up with something that is actually simple and elegant and solves the problem, but doesn't attempt to construct anything as grandiose. Of course YAGNI is a whole lot easier in hindsight when you can really look over the course of years and see that nobody ever needed it so it never should have been built and throw it out.


Such features to throw away /s

(You must not actually remove any of them. Enterprise jokers will want all of it.)

- security (the most forgotten until it bites you)

- GUI flow complexity (client requirement you cannot touch)

- distributed state management with logical consistency guarantees (horizontal scaling)

- ability to quickly adapt for new features

- schema upgrade capability (see above)

- advanced database queries

- event reactivity and/or scheduling (logging, security, notifications)

None of the above is quite actually given to you on a silver platter by any one library; it's a combination thereof, and they're rather hard to grow bottom-up organically.


I think that the parent comment was expressing a disagreement with your definition of "properly engineered code". I agree with them -- simpler code is properly engineered, lots of complex magic is not.


I've definitely seen my share of impossibly convoluted code written in the name of "best practices" (often done because we need to "future proof" against some imagined future use case), but I've also seen my share of properly engineered code that was elegant, testable, composable and easy to work with (code that "cleaved the problem domain at the joints" if you will).

But what I think the author meant is that the incomprehensible abstraction soup is at least in part a result of the "enterprise development" method. If you are solving a particular problem for a particular application you tend to solve the problem in as straightforward a way as possible (if you're any good, at least). But if you are an "enterprise architect" you are trying to create architectures that both solve no problem in particular and all problems at once. So you tend to get complex abstractions that are both complicated to work with and don't cleanly deal with any particular implementation.


Well of course, that's why it was put in quotes. The problem is exactly that. Over and over again you'll come across architectures that are massively overdone but claim to be "elegant and properly engineered".


yeah - the people actually keeping it simple (not designing for 100 future situations that will never actually show up) don't go on and on about elegant, properly engineered code. In general (exceptions exist of course) the entry-level person writes much simpler code and focuses more on what needs to happen now, etc.


In my experience it's more likely the entry-level person who will write overly complicated code (or create such systems in general); they haven't yet learned how to identify potential approaches and to choose the most suitable one, and might not appreciate how many of their problems they don't actually need to solve at all.


I wholeheartedly agree. Bonus points for globals you can set checked breakpoints on.

Long linear functions are victims of bullying. Sure, it is a balancing act, but I'll take bad linear code over deeply nested code any day. When trying to figure out how code works I can't keep much depth in my head, unless it is some tree walk on a data tree.


Exactly this.

The person who knows all the "engineering" - you have this giant mutating pile of impossible-to-follow (but maybe 5% faster) code.

The person who just codes enterprise style - long, but done, and easier to follow and edit.


I guess it depends on how you define "enterprise" but I always associated that word with unnecessarily bloated code. Or even worse, trying to reduce the amount of code by pushing more and more to ever more complicated declarative configuration files. That is mostly because when I hear "enterprise" I immediately think Java EE, which to me was the exact polar opposite of easy to edit and follow.


I think the hidden message in your comment is "duplication over abstraction" - and I agree! Often duplicating things is preferable to complicated abstractions.


Sandi Metz has a quote that was something like this: "We don't tell new developers about avoiding duplication because it's so important, but because it's the only thing they understand".

Duplication is much better than the wrong abstraction


I find functions etc. fine. But the problem is that a lot of languages let you go absolutely crazy with complex design patterns - and folks who like to be "engineers" and "architects" go totally nuts with these. There was a fad I think around OOP (maybe with Java) which then bled into a lot of places. Yes, you can do crazy stuff: generics, metaprogramming, reflection, etc.

But, for example, if Go just beefed up the standard library (especially around data structures - give us everything - and common functions there), I think they could have held off on generics a bit longer.


> The fix is to properly document your code.

I disagree that is the fix. I've seen horrible code that has extremely extensive documentation, and nice code with minimal documentation. I know which I would rather work with.

Documentation has its place, particularly for high level concepts and shared data and interfaces, and tricky, exceptional, or surprising things in the code itself. But it's a poor crutch for bad design or code.


This is 100% correct. There's no amount of writing about code that can make it better. In fact, if you can write well about what the code is doing, you can write the code well. It's a mystery to me that anyone expects people who can't write code that's clear and simple to then be able to write clear and simple prose explaining it. It's actually much easier to write code that only the computer (compiler or runtime) can understand, but it doesn't care if you name things meaningfully or if the structure reflects the problem domain.


I always wonder if people who talk like this have ever actually seen elegant and clever code.


Yeah, it's weird... like, if someone says that a math proof is "elegant and clever", I expect that to mean it is going to be a non-obvious way to make a complex problem that might have had a brute force solution easy to understand, where once you see it you go "oh wow, that's elegant and clever!"; but, somehow, there is an entire group of people (whom I refuse to work with, fwiw) who seem to believe that "elegant and clever" are bad things and wish for a world where everything is five full pages of algebra and number crunching... that you probably got wrong somewhere, as there are now an infinite number of repetitive opportunities to make a mistake.


Everyone has seen elegant and clever code, but it's really not necessary when you're writing a CRUD API.


Then stop writing CRUD APIs; make the compiler do it instead. If you're doing rote, boring, assembly-line programming, then you're doing the compiler's job.


Imagine you're working at a startup and trying to solve a tough real-world problem by creating software that involves writing some CRUD APIs. You bring someone onto the team who says, "we gotta stop writing these pointless CRUD APIs and write compilers instead."

I’m not trying to be dismissive, but I think this actually well illustrates the central tension between engineers who are more interested in the business problem and ones who are more interested in solving technology problems. I know that when you get to a later stage as a company you need both kinds of engineers, but at an earlier stage company you have to ensure all of your engineers are of the former kind and not the latter kind or you will probably not succeed.


I work as a consultant, and we are also looking for talented people interested in solving business problems.

We actively avoid the technology-focused kind, because we know they will not be able to adapt to the work we do. To be honest, we write a ton of CRUD apps, but anyone who would come and say "let's write a compiler" is guaranteed to get funny looks. Even if you are extremely talented and can deliver at the same pace as we traditionally do, you will probably fail to consider one or two "little" things that would turn out to be fundamental requirements! Nobody is impressed by half-working, cleverly written software.


It sounds like you're categorizing me as one of the people interested in solving technology problems more than business problems. In fact the opposite is true, to the extent that I've become a go-to person for questions on the domain I'm working in, even without any programming context around it (e.g. a senior technical review of an Excel document in the domain). I've become a recognized subject-matter expert in multiple business domains. If I'm not focused on solving the business problem, I don't know who is.

In fact it seems like it's quite the opposite: people who think in terms of loops, conditionals, and objects seem to be happy writing repetitive CRUD APIs all day. People who think in terms of business logic want to write business logic. You don't need to write your own compiler (it's weird that this became the thing I supposedly suggested), but you do need to develop good abstractions and data structures that fit your domain to get your code looking more and more like just a spec in the domain language.

The "don't reinvent the wheel" people who want their programmers to just bang out repetitive API code all day aren't any closer to the business logic than I am. They just don't understand what makes good software, and they don't realize that the wheels that are available to them out of the box actually suck for their task and always need some modification.


For me elegant and clever means that 1) my code won't get in your way and 2) can be thrown away easily.

In return I ask for a bit more time so I won't self-couple my code with introspection magic or make it undebuggable with weird dynamic design patterns, nor will I make wormholes so that A can communicate with B.

Deal?


deal. code should be written to be easily replaceable, not extendable

to me, more often than not, it means simple


In my enterprise experience the remodeler usually just ends up bolting more shit onto the existing heap of shit.

Dev time ends up taking longer and longer, wasting more and more money.

It's funny you mention greenfield being easy in the same sentence - because IMO starting over is often the better option than the shit bolting, but no one wants to go down that route because they need it next week. A whole bunch of small shit-bolting projects every week instead of sitting down and cleverly engineering better, broader solutions.


The best enterprise architects I worked with don't know shit. They can plot 3 boxes with 4 arrows and call it a day. They can take credit for a working system, it doesn't matter. What really matters is that they don't waste time on useless preplanning, and don't give you (as tech lead/senior dev) solutions to implement, but problems to solve. They don't know how to solve integration problems, but they know how to click to schedule a Teams meeting with the people who take care of those systems. The solution is usually easily agreed between tech people during those calls/follow-ups. The architect is able to wrap it in a PowerPoint presentation with 2 boxes and 1 arrow and call it an integration solution. It sounds like a joke, but I'm serious.


Speaking as a — now former — architect, I’d agree! The best use of my time always turned out to be dealing with scope and feature risk, areas of ambiguity, and governance. Part of this is because very few software projects hinge their success on complex and elegant solutions to thorny conceptual problems, but also because competent developers generally don’t need an architect doing much more than sketching out the broad contours for them to get in and design/build the system; what they really need from the architect is client alignment, well-defined feature scope and dependencies, locked down RACIs, realistic team sizes and schedules, and just enough governance to keep everything on track without overmanagement. A successful architect, in other words, is technically-competent enough to figure out what has to go into the solution (which crosses many disciplinary boundaries) and what the overall project size roughly looks like, but the second they try to create detailed technical plans they’re wasting time and money.


Thank goodness to hear from some folks with some sense here :) Feature creep risk is REAL. For some reason the new swapped out system all of a sudden has to support 20 incompatible features!


Hmm, yeah. The worst ones are the ones that think they actually know coding, or how a system should be implemented.

But mostly it’s annoying since we spend hours or days going around in circles until we arrive at the obvious solution (which the engineering team proposed at the start).


" Architect is able to wrap it in power point presentation with 2 boxes and 1 arrow and call it integration solution. "

That sounds very true. I am always joking that I could probably "architect" our whole Fortune 500 company in a few days.

In a sense, an architect should mainly be a coordinator and communicator. Find alignment and make sure everybody does their part. Detect problems and get everybody back on the same page.


I agree with this. They know most of the parts, what works smoothly and what needs more love, and they talk to people. Then they can give informed advice with a few UML diagrams and a few code examples.


We must've worked with the same architects!


There was that interview with Peter Norvig - https://www.youtube.com/watch?v=_VPxEcT_Adc - I believe there he said something roughly like "before programming was about composing algorithms, structures etc. - and engineers used ideas from SICP - now it's about composing 3rd-party libraries and solutions - and engineers don't read manuals, don't worry about how systems work internally etc." Pardon if I stretched the meaning too much.

I think this more modern approach has some important pitfalls which we're uncovering now and getting hurt by them.


Did Norvig then favor the current Leetcode interview in response?

(Which would then seem like the most dysfunctional example of institutionalized playing to the metrics that our field has ever invented.)


ah, yes and no. You really can't know the internal workings of everything, it's not practical anymore.

in other words: you can stand up a web site in less than an hour that will handle enormous loads across the globe, but not if you pause to read all the library code that goes into it.

fortunately, humans are pretty good at having faith in things


As a self taught person who likes to dive deep, I’m starting to realize why I fall behind some dude that watched some YouTube videos, did leetcode and got a job somewhere.

I follow a tutorial, and if I'm not understanding how the guy got to knowing that the property we should use in that scenario was X, I think I'm not understanding anything and that I won't be able to build anything, so I halt and try to figure it out, but I'm so inexperienced that I can't understand the docs anyway.

What I should be doing is putting that code in my notes and moving on having faith in that code.

I was thinking of opening an Ask HN about "how much of your code do you actually fully understand vs how much is just copied from somewhere?"


> how much of your code do you actually fully understand vs how much is just copied from somewhere?

100% is fully understood. Every single line. A very small percentage is copied from somewhere (usually adapted in some way), but even then: I understand every single line. And I expect the same of everyone on my team. If I'm doing a code review and someone says "I don't know; I found this code online" it does not go over well. I might have them delete all the pasted code and re-write it with me. If they keep pasting in code they don't understand, I may remove them from my projects.

I'm obviously a lot less strict about understanding libraries we import (although it's still important to develop a strong understanding of what they do, and their limitations, over time). The main difference is that someone else is supporting that code, while we have to support code that we pasted from "somewhere online".


Well, as I said, I don't have enterprise experience, but from what I see, all the stuff that is not 100% the creation of a new process seems to be tying libraries together.

I'm learning Spring because my company is Java-heavy and I'm hoping to do a lateral move there. The whole thing is an arbitrary rule set that the library writers agreed upon. Yes, there are many concepts of OOP involved, but I watch tutorials and the guy says "you need to write auth.userDetailsService", and I can only think "yeah, great, but how did you get to that conclusion? Where is the hidden recipe book I'm not seeing? How was I supposed to figure that out if I wasn't following this tutorial?"
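
For readers who haven't seen it, the line in question usually appears in roughly this context (a minimal sketch in the older WebSecurityConfigurerAdapter style; exact class names depend on the Spring Security version):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
    import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
    import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
    import org.springframework.security.core.userdetails.UserDetailsService;

    @Configuration
    @EnableWebSecurity
    public class SecurityConfig extends WebSecurityConfigurerAdapter {

        private final UserDetailsService userDetailsService;

        public SecurityConfig(UserDetailsService userDetailsService) {
            this.userDetailsService = userDetailsService;
        }

        // This is the "auth.userDetailsService" from the tutorials: it tells Spring
        // Security which service to call when it needs to load a user account.
        @Override
        protected void configure(AuthenticationManagerBuilder auth) throws Exception {
            auth.userDetailsService(userDetailsService);
        }
    }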


So "your code" only means the stuff you typed in personally? None of the libraries you use, the operating system facilities, the runtimes or compilers, none of that is "yours"? So when it breaks or does something you don't understand, then what? File a bug report, call your work "blocked", and go home for the day?


Yes, "my code" means the code I am responsible for, which includes the rare instances where I copy and paste from the internet. What does "your code" mean to you, if not that?

Correct, the libraries and compilers I use are not "my code". If there is a bug in some library method, I simply don't call that method and do something else instead. This may sometimes mean throwing the library away. Sometimes I will submit a bug report, if the process isn't awful (but it usually is).

I guess I don't understand why any of this is surprising to you?


To be honest, you’re doing the right thing: you should stop and try to figure out where things come from, why they’re there, and what they’re doing. Usually, if you’ve got “magic coupling” from a framework, that’s the part that’s going to bite you or it’ll be where you need to dig in for some critical piece of functionality. It’s not unusual for reflection-, plugin-, or attribute-heavy code (and I’ve abused all those systems heavily in the past, often at the same time) to break in weird and unexpected ways, especially in tandem with other frameworks. Sure, cut-and-paste devs look like they get there faster, but they won’t be able to diagnose and debug regressions, because their knowledge is just another cargo cult.


All the answers I’m getting are actually confirming my theory.

For vanilla code I get it. Vanilla Java I can totally understand where things came from and why. It's the libraries that get me. "How am I supposed to know that the library expects X?" is a very common question I have.

I understand it at the end but that’s because someone’s given me the answer. But I couldn’t derive that by deduction if I wanted to because I didn’t even know that was a thing.

And all the guys giving the tutorials seem to be running off their memory but they don’t tell you how they figured it out


I would highly recommend finding an online course on Algorithms and Data Structures.

I am a self taught developer, and I did one a couple months back and it really opened my eyes to a lot of stuff.

Look for the one on Coursera, it's free and really good. Just actually do the assignments. It's a lot of work but really worth it.


There is "From NAND to Tetris". There are not-too-few people with knowledge "from quantum mechanics to UI". At some point you get accustomed to actually know everything :) and you're annoyed with details which are excessive. The non-essential ("incidental") complexity.

You can probably make a website faster if you learn 3rd-party tools for that... but then you won't be in the position of Paul Graham at Viaweb, where a small team ran circles around the competition by being that much more productive. I suspect Apple's success is largely because Woz knew the ins and outs of the system.

Your important word here is "anymore". Maybe we're at a wave of complexity which will wane. I don't actually think it will last.


For anyone looking for a timestamp, that part starts around 37:12: https://youtu.be/_VPxEcT_Adc?t=2232

Great interview.


Why would it be a problem to write code in a way that's easily understood by others reading it? If your code is so complicated that a little bit of documentation can't explain it or at least help get people started then you're probably being too "clever" with your code and need to simplify it a bit. Simple code doesn't mean it has to be under-engineered or non-performant, and creating a culture that doesn't take the time to help others understand what's going on is how teams get to the point of depending entirely on 2 people to do all their enhancements/fixes.


Because sometimes the clever code may be easier to understand by an experienced developer even if it is less accessible to a less experienced developer. Non-coding example: "Two plus Three times Five" is easy to understand, because you don't need to know math symbols. But to anyone who knows math, "2 + 3 * 5" is easier/quicker to read.

For a coding example, in Javascript you have things like arrow functions that make the code more concise. But that is also harder to read for someone who hasn't picked up on them yet (which may have been the case when they were first introduced).


It is ambiguous,

"Two plus Three times Five" as a sentence reads and is naturally processed from left to right for a result of " is twenty-five".

But to anyone who knows math (precedence), "2 + 3 * 5 = 17"


Yeah. That's a trivial case, but I'd still probably use parentheses. For more complex precedence operations I definitely would.


The order of operations doesn't change because the equation is written out in English versus mathematical notation.


"There is no ambiguity because my context is the only context"

I learned a few years back that lawyers will give you a bunch of "ands" and "ors" in an expression without any concept of precedence. That doesn't necessarily mean left to right works, either.


How do you say and/or spell out parentheses?


Easy: Take two plus five and multiply it by six


That's watered down and overly simplified.

We can have an argument over ternary operators and new users and I'm going to argue that they're part of the common vernacular of computer science and everyone needs to sit down and learn them.

But nested ternaries three levels deep are horrendous. Stick with one level and simple expressions and keep it readable (assuming the background of just understanding the operator). Don't overuse it.

Similarly lambda functions are part of the vernacular, everyone is going to need to learn that necessary amount of complexity. But at some point nested lambda functions containing lambda functions are going to get difficult to read and reason about.
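
A rough illustration of where that line sits (my own example in Java, not the parent's):

    public class TernaryReadability {
        public static void main(String[] args) {
            int score = 72;

            // One level, simple expressions: fine once you know the operator
            String result = score >= 60 ? "pass" : "fail";

            // Three levels deep: the same operator, but now the reader has to
            // mentally unwind the nesting to see which branch actually fires
            String grade = score >= 90 ? "A" : score >= 80 ? "B" : score >= 70 ? "C" : "F";

            System.out.println(result + " / " + grade);
        }
    }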


I agree with you on language syntax choices, but as far as the article's examples are concerned, it seemed like the "clever" solutions to them were more about OO architecture choices via abstraction, for example. In which case, if it's not a codebase that you have experience in, then those architecture choices can get very confusing if not properly documented.


The real problem is that while the same code might look obvious to John Carmack, it might not look obvious to $outsourced_bodyshop_ressource_0443434.

Then you start coding for the lowest common denominator.


Recently we hired a consultant for our infrastructure. I wrote the code for that infrastructure using best practices I learnt from books, workshops etc.

While some people from my team have huge issues understanding it, the consultant read it ONCE and knew everything. He was simply a good dev who had the pleasure of working for other big shops - the ones that wrote the books I read and led the workshops I attended.

So it's always a matter of the „Do I hire a cheap HR resource or an expert?” question.

Nothing more, nothing less. The issue with writing smart code is that there are not a lot of smart devs. That's why we always fall back to the lowest common denominator.


Are you the good guy in this story? Writing code that no one else can understand unless they've read multiple books to push through the complexity? Do you write in Java by chance?


Good guy? Depends how you define that. And it would be a challenge to write infra code in Java..


In my opinion, writing code that is so complex it takes an expert to understand it, is undesirable.


But where do we draw the line? If someone does not understand the idea behind interfaces or proper exception handling, who is to blame? If you write Java and Streams are the natural solution, but the developer you work with never bothered to look them up, because they are "complex" and "FP-something"?

These are polemic examples, but there is a line where sticking to the lowest common knowledge will hurt your best developers, by either sucking the joy out of their work or by forcing them to write a lot of boilerplate code. And I've heard of teams where the whole team would ban features of a new standard of the language, because they had written code for 20 years without them before.
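
To make the Streams example concrete, here is a small sketch of my own (assuming a reasonably recent Java): the pipeline and the loop do the same thing, and banning the former just means hand-writing the latter.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    public class StreamsVsLoop {
        public static void main(String[] args) {
            List<String> names = List.of("Ada", "Grace", "Alan", "Linus");

            // The "complex, FP-something" version
            List<String> shouting = names.stream()
                    .filter(n -> n.startsWith("A"))
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());

            // The boilerplate a team falls back to when Streams are off the table
            List<String> shoutingLoop = new ArrayList<>();
            for (String n : names) {
                if (n.startsWith("A")) {
                    shoutingLoop.add(n.toUpperCase());
                }
            }

            System.out.println(shouting);
            System.out.println(shoutingLoop);
        }
    }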


> But where do we draw the line?

Well, we can draw the line based on who is going to maintain the code after we are gone. If we know that the organization is only going to hire rockstar ninja experts, then we can make code that is readable for rockstar ninja experts. If we know that the organization is going to hire mediocre developers, then we need to make code that is understandable by mediocre developers. If you think mediocre developers can't understand streams, then don't use streams. (Personally I think your examples of interfaces, exception handling, and streams, are all things that have their time and place, so it would be fine to use them when needed, and just expect that developers after you will know/learn that stuff.)


> If we know that the organization is going to hire mediocre developers, then we need to make code that is understandable by mediocre developers.

There's a secret third option: The organization could just hire good devs.


> There's a secret third option: The organization could just hire good devs.

That's rarely up to the individual developer to decide. If you cook a complex bowl of abstraction soup, knowing that mediocre developers will maintain it after you, that's on your conscience.


The evaluation objective for those mediocre devs should be to „get up to speed”, or the company will never improve and will soon be overrun by the competition.


Aha. And should every single company employ above-average developers? How does that work out, mathematically?


I don't know a corporation in IT that doesn't have evaluation programs. All management expects you to get better over time.

So using the opportunity to learn from a better dev is a great objective that will positively impact not only you but also the company you work for.

On the other hand, if developers stagnate and don't improve, that is a red flag for company growth. And each corporation wants to grow. Such a dev is just a bad hire and will tank the company.


Of course developers should learn and improve. Nobody is arguing otherwise. Yes, people can improve, that doesn't change the fact that the average company employs average developers. If your company hires average developers, then you need to maintain your codebase complexity at a level that is manageable by average developers. All this talk about "bad hires" and such is really missing the mark here.


IMHO, if the company hired you for your superior knowledge and expects you to deliver, you should deliver. It's really not your problem they opened hiring for someone out of their league. Even worse if managers agree to your improvement plan.

Don't you think that someone, before or after the hiring of a real expert dev, should check whether you really need that?

They can always fire someone for being „too good” or, like you say, for „decreasing their own standards”.

Thankfully those experts sooner or later leave those mediocre companies, but they are not responsible for the mess that was left - it's always the management that did not create clear expectations towards hires…


It doesn't: The average startup goes bankrupt.


The average corporation doesn't go bankrupt, so you moved the goalpost from "corporation" to "startup". Ok, sure. So, in your view, a developer who leaves behind over-architected complexity to "mediocre" developers is a surefire recipe for going bankrupt... and this was supposed to be in defense of overarchitecting complex codebases? To me it sounds like going bankrupt is bad, so we as developers should strive to prevent our employers from going bankrupt, no? So if you work for a startup that you know is going to hire mediocre developers after you, surely you should strive to leave behind the kind of codebase that doesn't bankrupt the company after you leave?


> The average corporation doesn't go bankrupt, so you moved the goalpost from "corporation" to "startup".

For a lot of corporations, making the jump to software is effectively starting a new company within the company.

> To me it sounds like going bankrupt is bad, so we as developers should strive to prevent our employers from going bankrupt, no?

Bad hires tank companies.

> So if you work for a startup that you know is going to hire mediocre developers after you, surely you should strive to leave behind the kind of codebase that doesn't bankrupt the company after you leave?

That's a red flag right here. It means sell your RSU as soon as you can.


> Bad hires tank companies.

We weren't talking about bad hires, we were talking about mediocre hires. It seems like you expect all companies to hire above-average developers, even though that is mathematically impossible.


> It seems like you expect all companies to hire above-average developers, even though that is mathematically impossible.

No, but that's what I expect from the companies that will succeed.


What if the NYTimes had journalists who described the news in their own distinctive prose or, worse, poetically? Speak plainly; it's the news, report it as plainly and accurately as possible.

Often many people like to attach mathematics and science to Software Engineering to signal elitism, but truthfully this profession is a lot closer to writing. Write clearly, first and foremost, and above all else.

$outsourced_bodyshop_ressource_0443434 needs to be able to read the news too.


I'm starting to notice a correlation between my prose and my code. Writing has not been my strength, and I've quietly worked on it. Learning to organize my thoughts to best communicate with others has enabled me to organize my code for others to read. I've found this has improved the maintainability of my code. I wish I had put more effort into writing throughout my schooling and early career.

I've noticed that the most successful business people I know are good at written communication. Much like programming often the better communicators write less to achieve more.


> What if the NYTimes had journalists that described the news in their specific prose or worse, poetically.

Eh, I think the news landscape would be improved if there was a news source providing it in poetic form. I wouldn't want everyone to do it that way.


Great comment. Thank you


"Let's not use inheritance, it's complicated and could confuse programmers. Better to just copy-paste code."

"Source control? I don't know, this git command line is a little bit too much. Let's just use zip files and email the source"


That is not what I am suggesting, but alas, you seem to have proven my point. I did not write clearly enough, and behold the outcome. Now imagine if we all do this in a codebase.


Your writing was fine. HN is big enough that you can't take a single critical comment as strong evidence you were unclear. Maybe it's them, not you.


What were you suggesting?


I'm not sure if this comment was in jest or not but I've seen both of these things happen in real projects.


Inheritance is the worst method of code reuse ever invented. I virtually never use it, and whenever I do, I usually regret it later.


There is no code in existence that will look obvious to $outsourced_bodyshop_ressource_04434.


> that's easily understood by others reading it?

The problem in these discussions always comes back to "others" aren't a monolithic entity. Different people find different things to be readable vs not. Different people will even use the exact same phrases (e.g. "as concise as possible without being opaque!") but still mean different things due to having different interpretations of key words in those phrases.


IME, writing code as concisely as possible (semantically and syntactically) is almost always the best route. I don't believe superfluous assignments, parentheses, braces, visibility modifiers or comments make things more easily understood by others. In time, experience and mastery of the language will reveal superfluous code as something that does the opposite, and much "cleverness" as mere utilization of the language or of abstraction.


Strongly disagree. Working on a codebase that was written by "short is better" people was harder, and I hated it.

I don't care for your saved keystrokes. Make it apparent.


From a syntax perspective, I would argue the grammar of a language does make it apparent.


You're generally not supposed to think about grammar rules when you read or write a language you know (code or prose)


Your omission of a period is ironic.


He is doing exactly what he says; that isn't irony in any sense of the word. You could call it emblematic, but not ironic.


You're making my point: the period being there or not does not matter, what matters is that you were able to parse my sentence and understand it without issue, and without needing to think about its grammar, thanks to the pattern-matching abilities of your brain.


That's the key...in time.

"In time" doesn't help when the majority of your devs are junior, which is a common case in large internal dev shops.


How will they learn to better express logic or master the language without experience?


I think this blog post is highly relevant https://daedtech.com/how-developers-stop-learning-rise-of-th...


They won’t. They’ll be mentoring other juniors and then onto management long before then.


The disclaimer at the top, where they say their experience comes from being an external consultant, is key. Because when I worked in Enterprise IT, we had our own team of senior software folk, and my experience does not match this article. I'm not saying we didn't have our problems; they were just different than described.

At the end of the day, though, the point of treating internal products like products and not projects is accurate. Every good IT shop I was a part of landed at this answer, even if we got there through different experiences.


I had the same reaction. I've spent most of my career working on non-consumer-facing software (either internal enterprise, or external to enterprises or small-medium businesses). At all the places I've worked, the team that built software was the one that would go on to operate it - there was no hand-off. Then again, I've never worked in consulting, nor worked on teams that used consultants to deliver code. Many of the points the article raised are good ones, it's a shame that the title is so misleading (or at least, reflects a narrow experience).


> I have never seen SCRUM or any agile approach working in a project setup ever. I am biased, though, because a company that truly lives agile values won’t do software development in a project setup

When I first read the Agile Manifesto - around '99, I think - it seemed clear to me that this was a great leap forward in software design, but that it was clearly implied that this couldn't be used in a "fixed deadline" environment. I really wish they had actually called that out in the manifesto itself and made that more obvious.


I've always been bothered by the (sometimes massive amount of) hate that I see on the internet towards agile. The experiences from other developers always seem a bit off, like something is not right.

I think your point might be the missing piece. Agile might not mix too well with fixed timeframes or fixed budgets, but rather needs an environment of continuous development where the requirements have room to drive the project within wide enough budgetary boundaries.

This to me feels like sort of a natural way to build things. If we need something we build it, otherwise we don't. And those "needs" might pop up at any time, by external (customer requests etc.) or internal (new technical requirements become apparent as the project is being developed) events.

My experiences of agile development have been from companies without deadlines and are generally positive.


> hate that I see in the internet towards agile

The reason is that enterprise implementations of "agile" are often the opposite of agile: waterfall with no frequent releases nor customer feedback, but lip service to agile practices and middle management ceremonies.

If you go through the agile manifesto and compare, these implementations violate all the points listed.


> The experiences from other developers always seem a bit off, like something is not right.

That's the point.

Agile is difficult to do exactly right. When done wrong, my perception is that it produces far more dysfunction and stress at an individual level than alternative project management methodologies.

If you've heard the fitness saying, "the best workout plan is the one you stick to," my feelings about project management are pretty similar. Agile may be great, but if your organization isn't capable of sticking to its central ideas then it's just not going to work. In that situation you're better off picking a methodology that's less efficient but easier to implement (like waterfall).

There are also some fundamental challenges presented by the methodology that are genuinely difficult to deal with. These tend to be ignored and then they manifest as dysfunction elsewhere in the system.


"agile" (the manifesto) was stating the obvious for people working on a product that had a lifetime longer than a "project". It works for products that have an ongoing life.

What it doesn't work with is environments with projects and budgets that are quarterly or annually assigned. That's where abominations like "scaled agile" have arisen.

Scaled Agile (TM) and Scrum and all the rest of the ceremony are to software teams what ORMs are to SQL databases: an attempt to correct the impedance mismatch between the way software teams work and the way companies work.


It's probably because Agile devolved from a system teams use to keep track of developer progress on bugs and features into some weird cult led by managers who have 20 hours of meetings a week and can't begin to describe half of the projects they're meeting about.


How would your team justify keeping its headcount during a budget cut? Without deadlines they could just say "Since nothing you do is urgent, we will cut your headcount in half; other teams need it more".


Urgency doesn't imply value. Time horizons can vary and while there tends to be a correlation of cost and time horizons (higher urgency - higher costs, lower urgency - lower costs) that doesn't mean one is more or less valuable, it just costs more.

ER doctors may provide urgent surgery or medical intervention that's life saving, and usually that costs a lot more than, say, long-term chemotherapy or HIV management with a specialist. Both are life saving; it just turns out that one conveniently has a longer time horizon, which makes it easier to juggle to reduce costs, while the other requires full attention and makes it difficult to juggle clients.

You would hope management doing budget cuts would adjust budget cuts based on value provided. This is arguably difficult to quantify in many cases but it shouldn't be quantified by how many, likely artificial, deadlines a group has and how busy they look. That's just silly.


Could you expand on this? A fixed deadline can mean a fixed budget, which the majority of software projects fall under.


Agile requires just in time requirements and priority setting, coupled with the ability to make small changes and iterate.

If you have a "this feature needs to be delivered by X date" type of corporate culture, then you have to make a commitment, and because dates are almost always tight, you need to be as efficient as possible to achieve that goal.

So basically you have the software teams wanting to work in two-week sprints, and you have execs needing the new thing for a contract launch in X weeks, and guess who wins? It's not the software developer, so agile becomes toothless, because you're not learning or iterating. You're just pushing against a Gantt chart the whole way, a sprint at a time, and might not even be able to release beta versions and get feedback, because that takes up valuable time.

I'm not as cynical as OP, but there is a really big push/pull that happens, and it can turn into a toxic environment if forces too far outside the product/development cycle dictate priorities (like sales, or execs).


I don't agree that Agile requires JIT requirements; rather, it benefits from them and thus promotes them as the best way to deliver requirements for a feature.

Under waterfall, a document is written by an analyst that describes a problem to be fixed. Six months later, the task is picked up but legal regulations or market conditions or even the rest of the software has changed. However, the Waterfall requirements are the requirements and that's what gets implemented. Best-case scenario is that the business analyst that wrote the original spec is still employed and has been updating the requirements as conditions change.

What we've done at successful agile shops I've been a part of is quarterly planning that collects at a high-level the current requests/needs of the business, prioritizes them, they get a t-shirt size to determine if all of the requests/needs can be met during the quarter and they fall off by priority until you're left with a manageable high-level plan of work for the quarter. Then those high-level requests are decomposed into epics and stories which are specced out and estimated, etc.

It works very well but it requires that people work collaboratively from the executives and business stake-holders to the technical leadership and individual engineers.


A: "We are agile here"

B: "When was the last time you met with a customer?"

A: "Never"

"Customer collaboration over contract negotiation" used to mean something.


I mean, why not? Are there any other methodologies that reliably deliver complete, working software on a "fixed deadline" basis?


This post largely glosses over the business motivation of these efforts.

Businesses want to reduce risk. That means reducing the probability of expensive surprises.

That means they'd rather spend 10% more on inefficient code running in production than to risk their 10x developer quitting and nobody else being able to understand the codebase, costing the company a lot more.

A lot of developers and engineers just want to have their space to tinker, to grow their skills and their mind. As a bonus, under this arrangement they'd get paid for it!

That's not how the real world works. Work follows the money, not the passion, and that's why work generally sucks.

E.g.: Photographers don't get to spend all day doing art photography, they largely have to do wedding photos to pay the bills.

If you don't want your hobby to suck, don't do it as a job (or accept that work sucks and do stuff on the side for your own enjoyment if you have the energy for it).

My best advice for any engineer is to take active interest in the business' needs and wants and consider those to be on a higher pedestal than implementation details.


what doesn't kill us makes us stronger.

> Personally, I have never seen SCRUM or any agile approach working in a project setup ever.

they work great. It's just that their goal isn't successful delivery of the project (on that aspect they fail spectacularly). The goal of SCRUM/Agile/Lean - i.e. what they are designed for - is extremely low latency and high observability for the management (and thus the management just loves it: total micromanagement under the guise of team freedom). That all comes at a great cost to throughput - the "watched pot" situation. The project direction is changed very fast, there is a lot of activity, the bees are overly busy, the management always knows and is able to report the current progress state, while the project is hardly moving toward an actually successful state.


"When Hiro learned how to do this, way back fifteen years ago, a hacker could sit down and write an entire piece of software by himself. Now, that's no longer possible. Software comes out of factories, and hackers are, to a greater or lesser extent, assembly-line workers. Worse yet, they may become managers who never get to write any code themselves." Neal Stephenson, Snow Crash, 1992


When I started reading Snow Crash and he described basically Uber/Doordash etc. in the first 10 pages I nearly fell off my chair. Insane how prescient Stephenson is.


I really, really hope Stephenson's Fall; or, Dodge in Hell isn't prescient when it comes to the rise of Ameristan. That book plus Palahniuk's Adjustment Day are a terrifying glimpse of our boring dystopia morphing into complete dystopia.


>If developers are prohibited from writing native SQL, they will be limited by HQL, writing slower queries or needing multiple queries from their application for one task.

We use a similar framework, but we create custom views when we need performant retrieval, or when a big object graph would otherwise be hydrated and isn't needed (like for a grid).

>If the exact language, framework, libraries are prescribed in every detail, developers sometimes need to bend these tools to solve their requirements instead of using the right tool for their problem. If the layered architecture is to be zealously followed, 50% of your code will be the mappers between the layers.

We use automapper, which does exactly what it says. We manually map one-offs. In fact, that's the basic philosophy of this design. Have the framework build everything from the entities, then one-off what you need.

Our design is generic, so every table has an entity; then we use a generic service layer with your normal CRUD + search functions, and our controllers are auto-generated using the same approach. We do custom work only for one-off stuff that either is more complex than CRUD or requires higher performance. It's cut our development time significantly, since for normal CRUD work everything is auto-generated based on the entity itself. You create the entity and DTOs, and the repository, service layer and controller are all auto-constructed using generic code. If you need something special, you create a custom controller/service. We tend to leave the repositories generic. Note this is just for the WebAPIs; the front end is a different monster.
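
The stack described above sounds like .NET (AutoMapper, Web API); as a rough, hypothetical Java analogue of the "generic service layer plus one-offs" idea, it might look something like this (illustrative names only):

    import java.util.List;
    import java.util.Optional;

    // Generic repository contract; a concrete implementation would talk to the database.
    interface Repository<T, ID> {
        Optional<T> findById(ID id);
        List<T> findAll();
        T save(T entity);
        void deleteById(ID id);
    }

    // One generic service covers plain CRUD + search for every entity/table...
    class CrudService<T, ID> {
        protected final Repository<T, ID> repository;

        CrudService(Repository<T, ID> repository) {
            this.repository = repository;
        }

        public Optional<T> get(ID id) { return repository.findById(id); }
        public List<T> search()       { return repository.findAll(); }
        public T create(T entity)     { return repository.save(entity); }
        public void delete(ID id)     { repository.deleteById(id); }
    }

    class Invoice {
        Long id;
        boolean overdue;
    }

    // ...and only the one-off cases (more complex than CRUD, or performance-sensitive)
    // get a custom service layered on top of the generic one.
    class InvoiceService extends CrudService<Invoice, Long> {
        InvoiceService(Repository<Invoice, Long> repository) {
            super(repository);
        }

        public List<Invoice> findOverdue() {
            // In the design described above this would be a custom view or hand-tuned query.
            return search().stream().filter(i -> i.overdue).toList();
        }
    }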


This covers some points that were discussed on Hacker News in response to my essay "Why are large companies so difficult to rescue (regarding internal technology)".

If this interests you, then this old conversation might also interest you:

https://news.ycombinator.com/item?id=20260114


You'll never see a purely Agile or Lean product from an Enterprise, because those don't have a clearly defined set of expenses, profits, and timelines.

One terrible thing about Enterprises is the way their finance team leads product decisions. In order to maximize their profit, they announce a product will be ready by X date, and estimate the cost leading up to it. If you don't hit those numbers, it affects a lot of other numbers, especially if you're publicly traded. And that's before the wild promises Sales makes. They are preternaturally addicted to arbitrary deliverables.

Essentially, their products are projects, and the customers are incidental to the whole thing. There is no estimate for customer happiness in the business case for a new product team. If you build it on time and under budget, everything else (getting a customer to use it and like it) is taken for granted.


Would you mind telling me at what kind of Enterprise you had that experience with finance?

I led a product finance team at a SaaS company (ServiceNow, Atlassian, Okta, etc. tier) and all of our models and analyses were product-driven, including A/B testing and surveys with customers.


Largely Enterprises that had too much profit to care, or were controlled by a parent org with a tight leash on expense with no regard to product. Old-school businesses that had barely begun digital transformation. As long as they got their 2% growth YOY, there was no interest in closely tracking anything but the core BI metrics for growth and expense.

I would add that a/b and surveys are not good enough to identify customer pain and solve the problems they most want solved. There's a raft of feedback mechanisms that most Enterprises ignore because their products are so complicated that nobody wants to sit with the users and find new methods for continuous improvement. Agile/Lean/DevOps/SRE constantly emphasize quality and immediate halting of product work until bugs are fixed, yet no Enterprise I have ever heard of does this (even for reliability - one of the core metrics of any online product!)


A lot of software development paradigms seem to have developed from requirements of Enterprise Software.

Sure, you'd have a Carmack here and there, but if left to their own devices, most software would end up being spaghetti code. Especially when it involves a steady stream of changes over time.


> The more standardized the environment for a software developer is, the more under-engineered their code will be.

Ok, so there is a tradeoff. It does not mean that the Enterprise approach is wrong in general. Maybe that standardization is worth it? How could one quantify that?


Sounds like you didn't have a very good try at presenting your alt. architecture to the enterprise architecture team.

As an architect I don't mind alternative approaches and variances from a stated architecture if they can be justified.


The issue is that the team/person doing the implementation has to "justify" their architecture to someone who won't be themselves working on the implementation. Or just in general separating "architecture" and "implementation" into separate tasks done by separate teams. The people in the best position to weigh the relevant tradeoffs and design the best architecture are the people who have the most domain knowledge of the problem being solved and who will ultimately be on the hook for the system when it is production.


this is written like it's bad to "hand over code" to others. I don't like it.


Reading code is harder than writing code, right? And handing code over to someone else is inherently handing it over to someone who does not have the experience or knowledge you gained writing it. If you want that person to be successful, then there is pressure to make that code simple. The author thinks that pressure causes the code to also be poorly or naively engineered. I don't think the idea is that handing over code is bad, but that a team should maintain responsibility for a product over time (because they can take advantage of their domain knowledge) instead of handing it over (like it was a finished product, when it isn't).


While it's not as drastic a handoff, handing code from your present self to your future self is also important. I find myself thanking my past self out loud and by name at times.


It is also the most empathetic thing you could do. Try to imagine what it would be like if someone handed over their ‘brilliant’ code to you.


I think that's part of the author's point: you can't really have brilliant (or even just clever) code if it's expected to be handed off to another team.


Handing over code comes with some very real costs, I think it would be naive to not consider these costs.


That's the main reason for DevOps, as I understand it: one team develops and operates the product. And in the case of DevSecOps, also secures it, rather than a separate team doing so. Some individuals will specialize, but a cross-trained core team ties both (or all three) of these functions together throughout the product's life cycle.



