Learning that some folks can produce so much value with crappy code.
I've seen entire teams burn so much money by overcomplicating projects. Bikeshedding about how to implement DDD, Hexagonal Architecture, design patterns, complex queues that would maybe one day be required if the company scaled 1000x, unnecessary eventual consistency that required so much machinery and so many man-hours to keep data integrity under control. Some of these projects were so far past their deadlines that they had to be cancelled.
And then I've seen one-man projects, copy-pasting spaghetti code around like there's no tomorrow, that had a working system within 1/10th of the budget.
Now I admire those who can just produce value without worrying too much about what's under the hood. Very important mindset for most startups. And a very humbling realization.
I had a colleague who was old school and loved optimising; everything he reviewed came back with tiny changes that would save fractions of a ms. His nemesis was a guy who was arguably the best coder I have ever worked with. Things came to a head in one meeting when old school said that if we did things his way our code would run quicker, and the response was legendary: 'If we coded like you it'd run quicker because most of the functionality would still be in the backlog.' I still think about that when it comes to optimisation.
The problem with optimisation is that you first need to decide what you’re trying to optimise for. Latency? Throughput? Overhead? Correctness? Time to market? Ease of ramping up new hires?
Optimising for simplicity first is almost always the right thing to do. Even if it turns out to be too slow you then have a reference implementation to confirm correctness against.
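To make that concrete, here's a minimal sketch (hypothetical functions, Python just for illustration) of using the simple version as an oracle for the optimized one:

    import random

    def pairs_with_sum_simple(xs, target):
        # Obvious O(n^2) reference implementation: easy to verify by eye.
        return sum(1 for i in range(len(xs))
                     for j in range(i + 1, len(xs))
                     if xs[i] + xs[j] == target)

    def pairs_with_sum_fast(xs, target):
        # Optimized O(n) version using a counter of values seen so far.
        seen = {}
        count = 0
        for x in xs:
            count += seen.get(target - x, 0)
            seen[x] = seen.get(x, 0) + 1
        return count

    # Check the fast version against the simple one on random inputs.
    for _ in range(1000):
        xs = [random.randint(-5, 5) for _ in range(random.randint(0, 30))]
        assert pairs_with_sum_fast(xs, 7) == pairs_with_sum_simple(xs, 7)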
In my experience it's rare to correctly identify the bottleneck in code while writing it.
Some view simplicity as minimizing lines of code, i.e., fewer moving parts.
I view simplicity more as increasing lines of code, the goal being to make the code very verbose. Sometimes this means more moving parts, but smaller "movement" at each step.
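For illustration, a toy contrast (invented example) of what I mean by smaller "movement" per step:

    # Terse: fewer lines, but a lot happening per line.
    def active_emails(users):
        return [u["email"].lower() for u in users if u.get("active") and u.get("email")]

    # Verbose: more lines, but each step is a smaller "movement".
    def active_emails_verbose(users):
        emails = []
        for user in users:
            if not user.get("active"):
                continue
            email = user.get("email")
            if not email:
                continue
            emails.append(email.lower())
        return emails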
> In my experience it's rare to correctly identify the bottleneck in code while writing it.
Why? It is so easy: just think of the work being done and pick the big parts; those are the bottlenecks.
The only reason I can see anyone failing at that is that they don't know how long things take in their programming language, but that takes very little time to learn, and once learned you will know the bottlenecks before writing.
In so many cases the bottleneck is using bad data structures everywhere. That often costs you 100x in runtime and doesn't show up in a profiler because it is spread out all over the codebase; that is the real bottleneck that never gets fixed and is why programs are slow today. To fix it you have to learn to spot the bottlenecks as you write the program, not rely on a profiler. Profilers help you find out how long things take; they are really bad at helping you understand how to make the program fast.
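A toy illustration of the kind of thing I mean (numbers will vary by machine; the point is the asymptotics, not the exact timings):

    import time

    ids = list(range(100_000))
    wanted = list(range(0, 100_000, 7))

    # Pervasive pattern: membership tests against a list are O(n) each,
    # and the cost is smeared across every call site, not one hot loop.
    t0 = time.perf_counter()
    hits = sum(1 for w in wanted if w in ids)          # O(n) per lookup
    t1 = time.perf_counter()

    id_set = set(ids)
    hits2 = sum(1 for w in wanted if w in id_set)      # O(1) per lookup
    t2 = time.perf_counter()

    print(f"list: {t1 - t0:.3f}s  set: {t2 - t1:.3f}s  ({hits == hits2})")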
The steps in this witty quote help put things in perspective about what anyone should do first when in doubt: "Make it work, make it correct, make it fast".
But sometimes you already have something that works, is correct and fast, but you still want to simplify: for example, when understanding _why_ that code is correct is too annoyingly complicated to explain and understand.
With AMD having 128-core CPUs and a 192-core coming soon... Depending on what you're doing, and how you're doing it, there's a LOT of raw horsepower you can throw at a problem. For example, a classic RDBMS (PostgreSQL, MS-SQL, etc.) over more complex No/New SQL solutions.
It’s factually correct, given that hardware and compilers (née transpilers) offer so much headroom, but part of me cries when you compare modern hardware utilization to how optimized-to-the-gills late-generation PS2 games were.
I would argue that the fast part isn't optional for web apps, but it can be in many other instances.
I write a lot of data-pipeline code in Python that mostly runs once a day. Does it matter if it takes an hour to run instead of five minutes? Not really, as long as it's ready to go for the next run.
That's just a rude thing to say. If you all coded like him you wouldn't be having the discussion.
The issue is when you have people who do not code with efficiency in mind, and someone who does think about those things reviews the code.
Most efficiency gains are probably marginal and not that impactful. So you're probably OK ignoring it. And it's true that bringing such things up during code review and then going back and changing it will take more time.
But if people wrote the code with efficiency in mind to begin with they likely wouldn't be spending much more (if any) time while writing the code. Just have to use their brains a lil.
And then you get an in-memory SQL database that is used for cached settings, with a text-based SQL query to retrieve every configuration setting (thousands of times during a login), and a login that takes many seconds to run.
Literal example... replaced with a lock-free hashmap, and the time dropped to the blink of an eye.
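Something like the following sketch, with invented names and sqlite standing in for the real system; in Python, a plain read-only dict shared across threads plays the role of the lock-free hashmap:

    import sqlite3

    # Anti-pattern from the anecdote: one SQL round-trip per setting,
    # thousands of times per login.
    def get_setting_slow(conn, key):
        row = conn.execute(
            "SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
        return row[0] if row else None

    # Fix: load everything once into a plain dict and read from that.
    def load_settings(conn):
        return dict(conn.execute("SELECT key, value FROM settings"))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
    conn.executemany("INSERT INTO settings VALUES (?, ?)",
                     [(f"opt{i}", str(i)) for i in range(1000)])

    settings = load_settings(conn)
    assert settings["opt42"] == get_setting_slow(conn, "opt42")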
I don't see the problem here: the original dev delivers quickly, the next dev gets to solve an interesting problem and gets to show off how much time they saved.
Delivering the over-complicated feature and improving the performance of the over-complicated feature both seem to get more praise than shipping the simple version to begin with.
With none of the licensure, mandatory education, and so forth. But the world needs programmers and technicians for most of the work we have to do. I do comparatively little "engineering", and the little I have done that deserves the name, I remember well. The majority of the work is programming and technician work. Nothing wrong with that.
But other branches of engineering are also about doing a lot of repetitive work, exactly what "programmers" do.
Engineers shouldn't compromise on things like safety, security, legislation, budget, properly documenting, properly communicating with others, etc. Same as anyone working professionally with code should, no matter the title.
But there's nothing that says that engineers should only be doing groundbreaking or interesting work.
The "artist" analogy works much better.
EDIT: Perhaps the main difference is what psychoslave mentions below – engineers are expected to question the relevance and necessity of requirements, and work together with business, rather than just doing as asked.
Maybe not in the U.S. but in Europe, Software Engineering is often a field of engineering where one can get licensed.
E.g., in Austria many IT techs and devs are called (at the doctor's office or in formal settings) "Mr. Engineer" ("Herr Ingenieur") if they fulfill some formal criteria and get licensed. A further qualification is becoming a federally certified civil technician for IT (Ziviltechniker or Ingenieurskonsulent) - also something that public sector contracts sometimes mandate.
Software around the world is engineered - it just often isn't in California.
Here's a story for you: I was born and raised in Switzerland, but I live in Germany now. When I first moved here, I was working as a freelancer and needed to register as a sole entrepreneur (Einzelunternehmen, one of the legal entity forms here).
Now, there are several types of taxes you pay in Germany based on your income. One of them is the income tax; the other is called Gewerbesteuer (maybe best translated as "commercial tax") - and the Gewerbesteuer can be significant (something like 12% of your income). Many freelancers are exempt from the Gewerbesteuer, and so are software engineers. However, I learned that you only qualify as a software engineer if you have actually studied software engineering or a closely related field. I had studied business administration. I tried to explain to them that I had been working as a software developer for over 10 years, literally doing the same job as somebody who had studied software engineering - to no avail. They wouldn't exempt me from the Gewerbesteuer.
I'm not claiming that I'm an "engineer" or that my computer science fundamentals are as good as those of somebody who studied computer science, but the tax exemption shouldn't hinge on a bureaucratic distinction like what you studied a decade ago, but rather on what kind of job you're doing.
Bizarrely enough, I always had "Software Engineer" written on all my documents here in Germany: my old Residence Permit, tax documents, contracts, the speedy authorisation from Agentur für Arbeit when I moved here, etc. As far as BRD is concerned, I am a software engineer.
But I never really studied software engineering or anything related, only Electrical Engineering.
But nobody in the government knows that. I don't have my diploma papers with me and have no time or interest in going to my country of origin to ask my university for them.
It doesn't matter so much, because with a degree in electrical engineering you are allowed to call yourself an Engineer/Ingenieur, and that is the only thing that matters. It is an Ordnungswidrigkeit (administrative offence) to call yourself an engineer if you don't have at least a 3-year degree from a university or a university of applied sciences.
With a degree in computer science you are also allowed to call yourself an engineer in Germany, for anyone reading this and wondering.
Yeah, Germany is kind of a bureaucratic hellhole. And from what I know France is even worse.
As for formal education - I've seen dozens of people with an engineering diploma who can barely write a FizzBuzz, so it is not the best indicator, and outside the big corps people mostly don't look at it when it comes to recruitment. Although that has kinda changed lately: junior positions started evaporating, and without experience the diploma is the main thing they check.
I live in a country where software engineers often have engineering degrees.
What that means in practice is that they study math and physics for the first 2-3 years of their degree, instead of computer science or software engineering.
Does that make them build better systems than Californian devs? Based on company revenues and salaries, I’d say no.
> Does that make them build better systems than Californian devs? Based on company revenues and salaries, I’d say no.
The majority of Silicon Valley devs are immigrants, though; they are good because they are sourced and filtered from all over the world, not thanks to American education.
Yet almost every major piece of software or idea started in California and almost nothing noteworthy in the space started in Austria. I think that tells us something about the real value of a piece of paper granting you the right to call yourself an "engineer".
I'm sure there are many things which make this such an unfair comparison that it's not worth making. If there were a firehose of money spraying all over Austria as a result of having the world's reserve currency, I'm sure the shoe would be on the other foot.
What is the mechanism by which being the reserve currency sprays a firehose of money into the US?
Keep in mind that being the major economy also carries costs - the US spends many times as much as other countries on its military and has no realistic path towards spending less.
And either way, their economy has been wildly successful since the 1800s.
This "gatekeeping organization" is the federal government. They have you pass an exam to prove that you know about public procurement law and how to plan and oversee a software engineering project before they let you work on public sector contracts, where you have to responsibly handle tax payer funds.
For something like this to work there needs to be a strong and technically competent government that the tax payers trust. This is presumably why it works in Austria and not the USA.
You can enter the workforce as a developer here without being licensed or credentialed at all, it's absolutely no problem (legally speaking) - it's just an obvious sign of a commitment to engineering responsibility and quality.
"Software around the world is engineered - it just often isn't in California."
Can you explain the difference between 'engineered' software, and the rest? I mean other than being created by someone with an engineering certificate or whatever.
But for me it implies that it gets planned and worked on formally like other engineering projects: lists of requirements, detailed proposals, design documents, BOMs, Gantt charts, the whole shebang.
The alternative being the ultra-lean SV style, where a project is lucky if even the public-facing documentation is not at least N months out of date.
He defines the Engineering Method as: 'Solving problems using rules of thumb that cause the best change in a poorly understood situation using available resources.'
In that sense a Software Engineer would not necessarily need to know all the complexities of the complete system to make something work as specified. And I would equate a 'Programmer' more with the Scientist or Mathematician in this case.
Funny how there are so many different ways to look at a title.
I've come to understand the difference between programmer and software engineer, but what's the difference between (software) developer and programmer?
Maybe it's only in my head, but for me, programmers work 99% of the time with code, while developers also spin up servers, CI/CD pipelines, domain/DNS stuff, and more things that are not strictly source code.
I call myself an engineer- but the way I understand it is that I still have to know when to use a hammer like simply copy paste code and not some complex solution.
The problem in software is that very frequently you don't have a real hammer. You have a hundred amalgamations of Swiss Army knives that do everything. Like, what is the hammer when you want to add some nice dynamic content to a website? Is it just JavaScript, or maybe React? Angular? Svelte? Or another example: you want to code a command-line app; what language is a hammer here? C++? Go? Bash? Python?
This analogy falls apart the moment you apply it to anything in software.
Well, the engineer's first move should be: "do we really need this dynamic content to achieve the overall goal, or can we do without?"
Most of the time, however, stakeholders will prefer that someone unleash the latest shiny tech stack (that no one on the team has mastered and that is about to be dethroned by an even newer thing) to deliver a pixel-perfect reproduction of the mock-up, rather than question the relevance of the proposal.
I think I might actually like to do more actual engineering, using models and sound notations and planning, thinking about eventualities, not being told not to by management guys, trying to get it right the first time, instead of always being told "later", because otherwise people will suffer (or die) and all that.
I would perhaps even like there to be a license, because then we would maybe get way fewer people interfering with a technically sound decision process. As it is right now, every manager dude thinks they know better what to build than the actual programmers/engineers, even if the programmers/engineers have worked on the product for years already and have gathered experience the frequently rotated managers lack. It is always soooo convenient for the manager types to defer good engineering endlessly, so that ultimately it never happens and shitty products are made.
YES! Couldn't agree more. Although my title has always been Software Engineer, I never introduce myself that way (I prefer Software Developer or just Programmer, or I say "I write code at XYZ").
I went to an engineering school. Actual engineers design bridges and machines and biochemical whatnot, and of course require licensing. I push code around.
I even try to subvert company culture and say things like "Well developers will need to update that API" instead of "engineers".
It's the same in the creative industry, no? Teens fall into it practicing the techniques of artistic self-expression, wanting to make a career out of showing the world who they are with expressive finesse, but the alienable value in these skills lies in using them to create a consistent brand identity, etc.
It’s usually more than 80% of the work. Painting something is really fast with modern sprayers but sanding off the old layer is still a lot of elbow grease, even when you have sand blasters.
A lot like development then: there is a lot of work before you write your first line of code. However, as in the original post, you can do a simple sand-down, no undercoat, and have something that lasts a dozen years if you keep on top of it.
I don't remember where I saw this quote, but... "It's okay to half-ass something, when all you need is half an ass".
EDIT: Totally agree about the 'important mindset for startups'. I had a similar eye-opening experience, working in a startup with 'cowboy' code that was actually quite good, but I had to unlearn a bit of stuff I read as a junior/mid-level developer.
It was code that was well-architected, had well-conceived data structures, and had business value, but every "professional" coder would deem it bad quality, since it had no unit tests, did some stuff unconventionally, and probably would have many linter warnings.
Most scenarios only warrant fractional assing. Also this philosophy provides the opportunity to say “ok guys, this time… we have to whole ass this one,” which is always fun.
Well-architected + well designed means you can go back later and fix the code if the project survives. I'm hitting this personally right now - the amount of code I can write is dwarfed by the number of features needed to get something like a working demo out the door. I can spend time and make everything perfect, or I can squint and imagine how I will implement fixes later and focus on making sure that I don't have to rearchitect.
Ask yourself: Will the amount of work it takes to "do this right" increase over time? (And by how much?)
I drew some quick placeholder art for a game I'm working on for fun. One day I might sit down and draw some good art, but not today. When the day finally comes to finish the art, the difficulty of creating that art will be no greater than it is now. In fact, with improving art tools, it may be easier to do later.
On the other hand, if I'm not quite happy with my database schema and I want to change it, I can spend a day on it now, or I can spend--based on my experience--months on it in the future. In fact, there's a good chance that if I don't fix the database schema now, it will never be done.
But aren't unit tests the half-assed solution? They are a wonderful help if you want to deliver something fast, but if you had more time you'd probably do something else.
It doesn't get said enough that unit-tests are not the ultimate in good code. There's a lot of shitty code with unit-tests, and there's a lot of really good code with no tests.
That's true, but a codebase without unit tests is apt to be deemed poor quality by professionals due to the time factor. Being able to release fast is an important quality of business software. Professionals don't look for perfect artistry, they want to see painter's tape in the right places to get the job done efficiently.
This is a false dichotomy. On one end, you have "overarchitects everything so much that the code is soon unmaintainable" and on the other end you have "architects the code so little that the code is soon unmaintainable".
Always write the simplest thing you can, but no simpler. Finding that line is where all the art is.
It's cliche, but I really do feel reading the Art of Unix Programming gave me a very good sense early on for how to walk this line carefully. Unix programs are high quality - but they're also, ideally, small, and written with an eye to compositionality with the rest of the ecosystem. The best architecture is in the middle, and often the best first draft is a simple series of pipes.
I'm not the person you're replying to, but here's my take: even though the two look conceptually similar, Unix programs are just a lot simpler. All programs run on the same machine, they read their input, execute, produce output which is piped to the next program, and terminate. Want to change something? Change one of the programs (or your command line), run the command again, that's it.
Microservices are a lot more complicated. You'll need to manage images for each of the services, a fleet of servers on which the services will be deployed, an API for services to communicate together, etc. In many (most?) cases, a monolith architecture will be a lot simpler and work just fine. Once you reach the scale at which you'd actually benefit from a microservice architecture (most companies won't ever reach this scale), you can start hiring devops and other specialists to deal with the additional complexity.
What actually gets hate, I think, is not microservices themselves, but the fact that microservices are often used in contexts where they are completely unnecessary, purely because of big tech cargo culting.
We only think of Unix programs as simple because we have many more abstractions nowadays. But you should compare a Unix program with DOS programs (probably CP/M too, but I never wrote those myself) of the time: poking directly at the hardware, using segmented memory, dealing with interrupts. The ideas that a program should be well behaved, should accept things as inputs, should push outputs, and should have a virtual address space are actually huge abstractions over what could just be a block of code run on a bare OS. I'm not saying that microservices are better than monoliths, just that Unix programs aren't as simple as we think they are in a world where we're managing servers like cattle and not like pets.
I'm pretty sure that most of the GNU POSIX command line tools were written before virtual address space (VAS) was common. And, as I understand, VAS is hidden from the programmer behind magical malloc().
That's a great question. Some might say it's because of the network - that makes microservices messy, and so on. But I don't think so: from what I remember of Plan 9 (the OS, successor of Unix), Rob Pike wanted to make it so that there is no difference between an object being on the network or off it. In the Unix philosophy, things have the same interface, so it's easy to communicate. For microservices that would be the REST API, which is unique to network things. I honestly see a direct link between these ideas. Unix is projecting a much nicer, simpler image, but nonetheless they seem to overlap a lot. The result in both cases seems to be a hard-to-debug network of small utilities working together. The saving grace for Unix is that you are mostly using stable tools (like ls, cat) and everything is on your system, so you don't get to experience the pain of debugging 5 different half-working tools.
Microservices provide encapsulation and an API interface, but are not composable the way Unix programs are when, e.g., called by a Unix shell on a Unix OS.
Either microservice A calls into microservice B or there's a glue service calling into both A and B. Either way there's some deep API knowledge and serious development needed.
Compare with a (admittedly trivial, but just doing that is orders of magnitude less complex than web APIs) `ls -1 | grep foo | sed s/foo/bar/g`, exit codes, self-served (--help&al.) or pluggable (man) doc, and other "things are file-ish" aspects, signals (however annoying or broken they can be) and whatnot. There's a whole largely consistent operating system, not just in the purely software sense but in the "how to operate the thing" sense, that enables composition. The closest thing in http land would be REST, maybe, and even that is not quite there.
>> Microservices provide encapsulation and an API interface, but are not composable the way Unix programs are when e.g called by a Unix shell on a Unix OS.
Because the Unix programs all use pipes as their interface. When you simplify and standardize the "API", composition becomes easy. Microservices are more like functions or modules each running as separate processes - if you use the same language for the services and the glue, you could just compile them all together and call it a program, right?
Unix utilities are stand alone and used as needed for specific tasks. They hardly change much, have no data persistence, usually no config persistence other than runtime params, and don't know about or call each other directly.
Microservices are moving parts in a complex evolving machine with a higher purpose that can and do affect each other.
The problem is that they are microservices in name only. Where a Unix utility is a few hundred or a thousand lines of C code in its entirety, a microservice will depend on a complex software stack with a web server, an interpreter, an interface with a database, and so on.
It's easy to forget this complexity, but it comes at a cost in terms of performance and, above all, technical debt. The microservice will probably have to be rewritten in 5 years' time because the software stack will be obsolete, whereas some Unix utilities are over 40 years old.
In theory, only the network boundary. Which allows you to independently scale different parts of the system
In practice, a way of splitting work up between teams. Also it makes it easier to figure out who to blame if something breaks. Also a convenient unit for managerial politics
So because manager X "owns" microservice Y, it's going to stay around so that they have something to manage. Over time the architecture mirrors the organization
If somebody created a complex system of one hundred small Unix utilities that all talk to each other over pipes, I am sure it would get, and deserve, a lot of hate. Unix utilities are nice for doing very small, simple things, but there is a limit.
> Always write the simplest thing you can, but no simpler.
I don't think so. Time and time again the client will insist on stuff like "the customer only needs a single email address/phone number" but you're going to pay for that one later if you do the simple thing and add an "email" column.
Same for addresses.
And a whole bunch of other stuff...you need to normalize the heck out of your DB early on, even if you don't need it now. The code you can mostly make a mess of, just choose a good tech stack.
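For example, a minimal sketch of what "normalize early" means for the email case (hypothetical schema, sqlite just for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # Instead of an "email" column on customer, give emails their own
    # table from day one; "a customer with 2 email addresses" is then
    # an INSERT, not a migration.
    conn.executescript("""
        CREATE TABLE customer (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE customer_email (
            customer_id INTEGER NOT NULL REFERENCES customer(id),
            email       TEXT NOT NULL,
            is_primary  INTEGER NOT NULL DEFAULT 0
        );
    """)
    conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme')")
    conn.executemany(
        "INSERT INTO customer_email VALUES (1, ?, ?)",
        [("billing@acme.example", 1), ("ops@acme.example", 0)])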
Go down the simple path to start, and refactor to a more complex solution when it makes sense to do so. If experience tells you the client is definitely going to ask for it later, add a “break condition” that tells you when you need to upgrade. You can put entry points into the code - comments, interfaces - to make it easier to do the upgrade.
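One hypothetical shape such a "break condition" entry point could take (names invented for illustration):

    import warnings

    # Simple path: one email per customer, stored as a plain field.
    def set_email(customer, email):
        # Break condition: the day someone calls this with a second,
        # different address, it's time to refactor to an email table.
        old = customer.get("email")
        if old is not None and old != email:
            warnings.warn("customer has >1 email; upgrade the data model "
                          "(entry point: set_email)")
        customer["email"] = email

    c = {}
    set_email(c, "a@example.com")
    set_email(c, "b@example.com")   # triggers the upgrade warning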
In many, many cases this doesn't work, and it crashes and burns the whole project/startup when the refactor becomes necessary.
Sometimes messing up your fundamental architecture means that you hit a refactoring your company won't survive (while your competition grabs all the customers who wanted that feature your architecture doesn't allow).
This is where experienced lead engineers earn their worth - they understand which parts cannot be fudged and fixed later and need to be there from the get go.
Normalise the DB from the get go (doesn't really require much effort), then charge for the fact that "actually we have a customer who has 2 email addresses".
That's not a "false" dichotomy, that's an actual dichotomy: it's a real thing. Reading your comment, even tho you don't say it, I get the feeling you'd be with the rallying cry, "Bikeshedders Assemble!" hahaha! :)
Oh, I see. That's a good point. But I don't think hu3's comment was suggesting there's only two options, just illustrating some possible margins to describe the landscape.
Maybe stavros was hallucinating that strawman reduction in there, is what I think. Like, you don't have to say it's a false dichotomy unless that's the only way you read it. The existence of something between the margins should be obvious. Anyway, haha! :)
Yes, the compounding effects of previous architectural decisions, but not if you take a balanced path, guided by awareness of the two extremes. So it needn't. Hahaha! :)
"Unmaintainable" is a weird word, used far too much. There are genuine ways code can become unmaintainable, of course, but the main use I have seen is "I don't want to understand it and would rather rewrite it".
There's a discrepancy here: the top comments say "some people can produce a ton of value by not caring about code quality" and "I didn't care about code quality and got bit later on". And people discuss that incurring technical debt with bad code can sometimes be worth it and sometimes not.
The logical implication of technical debt having an interest rate (i.e. It costs more to fix something later than now) is that like money, features have time value. This is what makes it worth it to incur technical debt.
Thinking about the implied interest rate of technical debt makes it easier to rank what should be prioritized for a fix. High interest debt should be paid off first. But you might also focus on refactors that reduce the interest rate, such as by modularizing your code so it's easier to replace down the line.
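A back-of-the-envelope way to rank them, with made-up numbers just to show the arithmetic:

    # Hypothetical debts: (name, hours lost per month, hours to fix now)
    debts = [
        ("flaky deploy script",      12, 8),
        ("copy-pasted validation",    5, 16),
        ("unindexed settings table", 20, 4),
    ]

    # "Interest rate" = ongoing cost divided by one-time payoff cost;
    # pay the highest-rate debt first.
    for name, monthly_cost, fix_cost in sorted(
            debts, key=lambda d: d[1] / d[2], reverse=True):
        print(f"{name}: pays for itself in {fix_cost / monthly_cost:.1f} months")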
Btw, if you like the financial metaphor, then technical debt is a bit of a misnomer. 'Technical debt' behaves a lot more like equity you sold than like debt you issued.
In the sense that the 'technical debt' is only something to worry about in the future, if your project goes anywhere. But if no-one ever uses your product, you don't need to fix the 'technical debt'.
Thank you! I like the metaphors because they allow me to think about related variables. What metrics would I use to compare different technical equity options? Is there an implied valuation of my project based on the % of developer time I have to spend fixing an issue related to the feature value?
I learned this lesson when working on very fast-paced projects with way fewer developer resources than needed. To ship a feature on time, shortcuts in quality have to be made, but you learn to make the shortcuts in a way that is easier to go back and clean up later.
My manager gave me grief the other day for adding a TODO that I had no intention of ever doing. TODOs are like a get-out-of-jail-free card. Don't want to do something the reviewer is likely going to call out? Just add a TODO.
I still find them helpful when the change is outside the scope of the current task. I especially like the ones that include both a reference to a Jira ticket (or similar) and an explanation of why, or any gotchas:
    # TODO: (CAFE-1256) Refactor steamed-liquid stuff to support non-dairy options
    # We don't currently support non-dairy coffee additives, but once we do
    # we will need to ensure that the steamed-liquid transport
    # also alerts when users have a dairy allergy.
This is a lot of stuff to add for some things, so we might not want it everywhere, but it helps explain to a reader why we don't currently have it built (product doesn't need X yet), but also has hooks to existing (planned/canceled) tickets and is easily greppable (e.g. "non-dairy" or ticket name).
I've declared quality bankruptcy. Decisions are now driven by user needs. Did I half-ass that feature? Yes. Is anyone actually using it despite crying it's essential? No. Then it's not getting cleaned up. Are they not using it because it's shoddy? I guess we'll never know.
My way of planning and writing software shifted a good bit after I went from working at a mid-size tech company to working for myself. Suddenly I cared very much about how long it took to reach a functional product and very little about developing sophisticated architectures.
Simple UX too. Early adopters are so much more forgiving of a boring but functional user interface than we want to admit. It doesn't have to look amazing out of the gate. It just needs to do something cool.
Plus a lot of our overcomplicated architectures on the frontend are because we shoot for a UI that competes with products that have been iterated on for 15 years now by teams of dozens or even hundreds.
I can relate. I am now on a small team of half a dozen (2 front, 2 back, 2 devops). We've been smashing it for the last year: we replaced a hodgepodge of systems with a configurable framework and a modular integration layer for third-party systems. As a reward for our work, management are bringing in a consultancy we are supposed to train to replace us. That will be a very expensive and inefficient exercise. How do I know? Our small team was created to deliver what the previous consultancy could not do in 5+ years. That experience taught me once again that management in large organisations have no idea what they want to build, how to build it, or how to maintain it, and that's where large consultancies come in: they milk the client for all they can and leave a massive mess behind them.
The solution sounds simple: Refuse to train the incoming consultants. You can do this indirectly by quitting before the consultants arrive, or, directly, by telling mgmt that you refuse to train your replacements. Your experience sounds great to sell yourself into a new role with better managers.
I think one of the most valuable lessons I have learned in software engineering is that you can write entire projects with the express plan of rewriting them if they actually gain traction. If I want to prototype something these days, I will often write code that, while not quite spaghetti, would definitely not pass a proper code review. It's actually kind of fun. Almost like a cheat day on a diet or something.
Unfortunately, that rewrite step often doesn't happen. I can't count the number of times a prototype that was meant to be thrown away was actually put into production because "it's cheaper and faster than rewriting."
>I can't count the number of times a prototype that was meant to be thrown away was actually put into production because "it's cheaper and faster than rewriting."
Did the business make money, though? I think that's the law of the jungle
I’m working on a prototype now, but I deliberately made it run entirely in the browser (IndexedDB) to avoid the problem that I might be asked to put it in production!
I think the other replies miss an important part of your comment:
> Some of these projects were so far past their deadlines that they had to be cancelled
Speed is really important a lot more often than devs like to acknowledge when a company is small and fighting for its life to get revenue, let alone become profitable, and the code debt is often worth it. Fixing the code debt itself doesn't need to be any more thorough than necessary either.
This is something that I like about the metaphor of 'code debt', and which tends to go over people's heads: debt can be a perfectly fine instrument, and just like real debt, it can be a good thing when leveraged wisely. The issue is more when debt is treated as 'free money' and used carelessly.
At the same time, companies often fail to invest into long-term goals, like maintainability, increasing test coverage, or even bettering the internal UI, even when it’s their core, business-critical product.
And the other part is just the sheer number of projects that can't deliver that "speed" after a year or two because shoddy, quick, poor decisions were made around code quality. Once you find out that your startup chose the wrong architecture because "we need to do it fast" and it needs to stop pivoting for 6 months to unfsck itself, it's mostly too late.
Unfortunately, people who create those monstrosities hide behind the same "keep it simple and quick" excuse as people who know how to prioritize.
It's interesting how many people here ignore that scenario - it's surprisingly common. Is it because most developers jump ship at that point?
I have to kind of ask: maybe it's the design committee that instead produces the spaghetti in that case? They are trying to commit themselves to decisions without much knowledge about the problem they are solving. I understand this is the reality of business and so on, but let's not imply that this is good software design. The basic operation of abstraction (as this vague magical thing which creates concepts) has to have something to abstract from. When we do abstraction in a bubble, not informed by the problem - like the design committee does - what we get is abstractions made from abstractions: foundationless nonsense. Maybe it's this spaghetti monster, the rogue coder, the guy who actually tries to solve the problem, who is the real designer.
Exactly this - I've a colleague who just fell into "solution mode" and started whiteboarding an LLM framework he could fine-tune with RAG and some 3rd-party vector database. It's six months later and there's still nothing working, or even a deliverable schedule.
Compare this to another colleague, with an almost identical use case, who just downloaded an open-source LLM, wrote some glue code, and set it loose in production. It's not pretty, but it (mostly) gets the job done.
As the old adage goes: "Perfect is the enemy of good".
I am one of those people: I wrote a production system controlling a million-dollar business fifteen years ago that is controlling an almost billion-dollar business today. The auditors are the main reason it is being changed, "single man risk" and "obsolete software" (Delphi/MySQL) being the two highlights from the Deloitte IT team.
The code is not brilliant, it is full of technical debt (as thats what happens when companies grow and change) but is still perfectly maintainable.
I have been watching the herculean effort of three different teams to replace this system, carving off the low-hanging fruit to NetSuite and React/Java applications while repeatedly delaying the core system as they agree on standards to use.
I stick around as I am quite close to retirement now, and the employees and the company are good to work with and don't deserve to be abandoned. At this point I genuinely want the projects to succeed so I can move on, as there are many small companies still out there where I can make a difference, even with my age, single-man risk and obsolete software.
Having heard a variation of this comment many times, I keep waiting for an “aha” moment, where I see the light and abandon my obsession with minimalism and clean code.
But at least in science roles it hasn’t happened yet. Rather, I keep seeing instances of bogus scientific conclusions which waste money for years before they are corrected.
Being systematic, reproducible, and thorough is difficult, but it’s the only way to do science.
Literally the only thing I tend to worry about up front is deployment automation. I've worked in so many environments that don't have it, or have some byzantine manual deployment strategy that just gets irksome and difficult. I'm a big fan of containers, even for single-system deployments. If only because it always bites you when you are under the greatest time pressure otherwise.
Beyond that, my focus is on a solution to the problem/need at hand, and less about optimizations. You can avoid silly stuff, and keep things very simple from the start. Make stuff that's easy to replace, and you often won't ever need to do so.
Most software isn't about science, and isn't engineering... it's about solving real world problems, or creating digital/virtual solutions to what would otherwise be manual and labor-costly processes. You can shape a classic rdbms into many uses, it's less than ideal, but easy enough to understand. Very few jobs are concentrated on maximizing performance, or minimizing memory or compute overhead. Most development is line of business dev that gets deployed to massively overpowered machines or VMs.
But that's just the point, for most problems most people have, you don't have to be scientific. If your invariants vary and it breaks 5% of the time that's fine and nothing bad happens.
There's so much waste in the world, that it is unbelievable.
Counterintuitively, however, I have stopped caring about waste and have focused more on the value. Waste you can always optimize away later if you want to; value creation is the difficult part.
So much this. But there is a fine line somewhere in there. I have seen, and admittedly worked on, projects where time constraints simply do not allow you to polish all the bits and pieces. That's perfectly reasonable during POC/early development stages and, if anything, I encourage it: writing hundreds of tests when all requirements change three times a day is incredibly counter-productive, slows you down, and eventually burns you out. It may happen that crappy is much better than shiny, polished and overcomplicated if the project itself is not going to scale any further than it already has. And once you get a more complete picture of what the end goals are, you can go back and gradually start doing things "the right way".

But I've also been at the other extreme. Take my old job, which, despite my cognitive dissonance at the time, I hated to a large degree because of this: relatively early stage with ever-changing requirements, endless dependencies, and a brutally unstable development setup where deploying a single pod in minikube was a terrifying prospect because everything was hanging by a thread and rebuilding the cluster took hours. That was made even worse by dozens of forked open-source projects that were patched left and right to fit the needs, lagging years behind the original projects, wild dependencies between repos, no real observability over what was going on, and version control catastrophically packed with auto-generated code (like 80% or more), made worse still by the fact that everything was git push --force so they didn't have to deal with conflicts. Imagine having to explain that this practice should be avoided in nearly all cases. In a nutshell: imagine crappy code and infrastructure which pretends to be, and is sold as, enterprise-grade software. I guess cognitive dissonance was a common theme in the company, since everyone was under the impression that everything was perfect. Which couldn't have been further from the truth.
The lack of these design elements in a solo project does not define crappy code. More often, it's due to a codebase being modified by multiple contributors, each wanting to get their work done quickly without paying attention to the overall context or considering others' work.
Similarly, some (non-tech) entrepreneurs can produce a basic MVP that generates money from the start, sometimes at an insane MRR. They use something like Google Sheets or Bubble, or pay a dev shop a fraction of a corporate project's cost to get it done.
I can totally understand the move-fast-and-break-things mentality, but I'd like to stress it's equally important to pay back the tech debt you incur. I am working on a massive Spring monolith that's somewhat of a huge pile of spaghetti, and when higher management decided to enforce code quality in the pipeline, it became a living hell to work on.
I can't even add a logger without having to refactor the whole 2000 line class while writing a whole-ass unit test for it. It's been a full year and I still have to deal with this sort of crap on a weekly basis.
The most ironic part is that most of the devs that cooked the spaghetti are now either in upper management or on the same DevOps team that's demanding we clean the mess up.
I've come to the conclusion that you can't architect yourself out of this.
What you might do is write tests early, so at least you have integrated a test rig to build on if the system actually gets any use.
Most of the work on mature systems is like this, rewriting and refactoring stuff. If it is very concrete, non-abstract, it's generally easier to work with in this phase than if it was cleverly and abstractly architected. Even if it's a big spaghett.
In reality, the quality of execution is independent of the development approach - you can make a complete mess of either approach; however, the WHAT_THE_BOOK_SAYS approach WILL cost you a LOT more time and money to discover your team has messed it up, and WILL cost a lot more time and money to fix.
My experience is that, just as your product evolves & grows, your engineering strategy should evolve and grow. There's a good reason why fortune 500 companies will have enormous IT teams, using enterprise cloud technology... and there's a good reason why successful tech start-ups do the exact opposite.... but hopefully, the startup will become a fortune 500 company.
I'm numb to it. It's just a fact of life because tenures are so short, quality is often very opinionated, and jobs are readily available, and you usually get a raise from the next job too. You do what you can to stay employed until you can get the next job lined up.
It has become overwhelmingly obvious that the industry is never going to reach a state where technical debt and bad code is exceptional and high quality is the norm. If there is a maximum amount of pessimism I could have, it would be for this.
I had only one job so far where code quality was a primary objective. It was fantastic, but it was also extremely slow and expensive. Slower and more expensive even than what you're probably thinking right now. I was shocked.
...bikeshedding is when you have that all-day, all-teams meeting where marketing and design and management argue about whether to put X content in the header or footer.
So hard to resolve this in my head - unless it’s understood there’s a “detonate” button attached to the code. The thought that it might escape, or worse, get passed on or sold (!!), is chilling.
Optimizing for 1M concurrent users, paying down tech debt, refactoring, and testing are all things that engineers love to cite that will somehow make everything better. The customer won't feel a single one.
Spotify uses 42 different versions of jQuery (among a dozen other libraries) and it's working just fine (with >1M concurrent users).
Even electron! That crap is so heavy, its (lack of) performance is notable. But! Many billion-dollar companies built on its back.
I laughed when an exec at Evernote told me that every year they go to MIT to try to recruit the top 1% of cs grads. What a waste of talent.
A few years ago I worked on a system that had replaced a previous system that was like that: a bunch of microservices with multiple instances, communicating via message queues, all in the name of scalability and high availability.
The actual nonfunctional requirements? Handle between 0 and a few hundred requests per day, and it would be just fine if they got delayed a few days as well.
The best part was that the overengineered previous system actually had far more outages and delays caused by its complex deployment and data model than the simple single-instance monolith that replaced it.
I've been working for the same company for 10 years. When we built project A, I was surrounded by people who had built the same product together for the past 20 years. As a younger dev, I deferred to them as they were much older and had done this before. You would probably admire them. They pumped out code and delivered immense value. We had no over-engineering because there were no upfront design discussions. When we needed to use some service, we just wrote that directly in our normal framework code instead of using some over-engineered abstraction. That's usually highly praised here on HN.
Years later, that project became an extreme maintenance burden. There is no separation of business code from framework code, so the first major framework migration took... years. When they completed the framework migration, it was already 2 versions out of date. Subsequent upgrades were faster and given more priority, but still took multiple months. Onboarding new clients took whole teams. The code is fragile, lacks tests, etc. The app is classic "spaghetti code". But they sure got to production quick!
Around 2.5 years ago, I was tasked with building project B. There was no development team. I worked with two domain experts, but ultimately the development of the product was in my hands. This is a product that will live on for at least another 10 years and will likely overtake our core product in revenue. There is no one to blame but myself if in 5-10 years this app becomes another unmaintainable mess. If you're in this position, what do you do? For me, all I could do was draw on my experience to try to avoid the mistakes I've seen made in the past.

We made a lot of mistakes in project A, but I knew a big one was tying our business code, the core of our business, to the particular flavor of web framework that happened to be popular when the project started. How do you avoid that? Well, "Hexagonal Architecture" sounds like a big scary buzzword, but it's a damn simple pattern that solves that elegantly. When we started, I basically received an indirect mandate to use the same framework as project A. After we had written ~25k LOC, I had had enough, and we dropped that legacy framework and migrated to something much better. And we didn't have to change a single line of our business code. Our new framework allows us to iterate much faster than the legacy one. 2.5 years in, and our code looks as clean as that initial POC I built.

Hexagonal was a small bit of the initial architecture I did. I only point it out here as a counterexample. But all the little things I've done which are considered "architecture" have paid off years down the road.
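For anyone who hasn't seen it: hexagonal (ports and adapters) really can be this small. A minimal sketch with invented names, not our actual code:

    from typing import Protocol

    # Port: the business code declares what it needs, in its own terms.
    class OrderRepository(Protocol):
        def save(self, order_id: str, total: float) -> None: ...

    # Pure business logic: no imports from any web framework or ORM.
    def place_order(repo: OrderRepository, order_id: str, total: float) -> None:
        if total <= 0:
            raise ValueError("total must be positive")
        repo.save(order_id, total)

    # Adapter: the framework/DB side implements the port. Swapping the
    # framework means rewriting adapters, not the business code.
    class InMemoryOrderRepository:
        def __init__(self):
            self.rows = {}
        def save(self, order_id, total):
            self.rows[order_id] = total

    repo = InMemoryOrderRepository()
    place_order(repo, "A-1", 99.0)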
I'd like to think that, most probably, your job exists and is funded because those folks (and you) "got shit done" for the last 20 years.
In my experience (around 2 decades also), when it comes to small/medium projects, spending a bit of effort separating business logic from UI/database/server code is wise up to a certain point.
If the effort doesn't add too much code and layers of indirection, then fine.
However, DDD/Hexagonal Architecture consultants tend to add so much boilerplate and so many layers of indirection between an HTTP request and a response that it becomes detrimental to productivity.
Especially this part (but the entire article is a gem):
> This means that Dependency Injection and Dependency Inversion are not enough to decouple components, we will need some sort of architectural constructs. We might need events, a shared kernel, eventual consistency, and even a discovery service!
Respectfully I'd like to think that it is satire. But it isn't.
With events your code is no longer linear. Stuff happens with triggers (events) that are not clear at first glance unless you fully understand the entire stack.
Shared kernel is just a place to shove interfaces under the carpet and pretend code is no longer coupled because you added an extra 2 layers of indirection (interfaces and kernel) between 2 lines of code. Congrats, now it's much harder to understand what actual code will be executed when I call this method.
Eventual consistency... Is that a good thing? We spent 4 decades maturing ACID compliant transactional RDBMS only for someone to come and disregard all these gains?
Discovery service... Yeah, your code became so complex that you need more code to discover code. Because we can't have one layer calling another directly. Nonono, that would be coupling. And coupling makes it hard to change frameworks in a distant but slightly possible future. I'd rather pay a very expensive insurance premium from now on to safeguard against that possibility. Even though the insurance itself is so expensive that it alone can suffocate the project.
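A toy sketch of the non-linearity point about events (invented names; any resemblance to a real event bus is schematic):

    # Direct call: the control flow is visible at the call site.
    def register_user_direct(email):
        send_welcome_email(email)

    # Event-driven: what runs when "user_registered" fires depends on
    # whatever was subscribed, possibly in far-away modules.
    handlers = {}

    def subscribe(event, fn):
        handlers.setdefault(event, []).append(fn)

    def publish(event, payload):
        for fn in handlers.get(event, []):
            fn(payload)

    def send_welcome_email(email):
        print(f"welcome mail to {email}")

    subscribe("user_registered", send_welcome_email)

    def register_user_eventful(email):
        publish("user_registered", email)   # who handles this? grep and pray

    register_user_eventful("a@example.com")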
Haha, I read through the beginning of that article which focused on hexagonal. While it was an over-complicated example, it didn't look too bad. But then skipped ahead to your quote. It definitely felt like satire to me! (Or at least I hope it was!).
And that's where a balance is needed. No one seems capable of defining what good architecture is. We have plenty of posts about spaghetti code, architecture astronauts, etc. But there are few comments with success stories, especially about fixing a spaghetti codebase and what that project looks like years down the road. And I think that's because the problem is multi-faceted.
It's not "just" a technical problem. From a technical side, it's primarily lots of little patterns that as a whole add up to producing quality code. But there's also the social side, to hiring, reining in devs, enforcing the code patterns and quality, pushing back on business, etc. I can give lots of examples, but this would be a long post. All of these factors ultimately determine the fate of the project, the quality of the code, and whether it becomes spaghetti code or an over-engineered mess.