
I'm always thinking about "Can I (or anyone) get back into this easily 6 months from now?"

In my situation, I probably will have to do that so there's a selfish reason there for sure.

I recently had a whole series of frustrating situations where I dug through old code / systems, rediscovering how they work just to make small changes, or to find out the "small" change was actually enormous. Really deflating stuff. It's not my fault, but it can be so demoralizing. It feels like a weight on you... I was done for the day after each of those horror shows.

Then yesterday I started what was scoped as a 3 day project, and in 2 hours I ... did the thing. It was super flexible / powerful, handled errors gracefully, and was easy to change / test. All because a year ago someone (well, myself and another person) took the time to simplify the spaghetti code that originally existed and break it into more digestible functional-esque chunks. Dropping something "in between the chunks" (fancy technical terms here) was easy to do, test, and read. Completely the opposite experience: it was energizing and fun.




For my consulting, I primarily practice "reference-first architectures."

The idea is we identify the rough shape of what we are going to build and the components needed to deliver it (Linux? Terraform? K8S? HTML/CSS/JS? etc.).

Next we measure up what we can "take for granted" for the engineering skillset the organization hires for. Then we pick books, official project documentation, etc. that will act as our "reference." We spend our upfront time pouring ourselves into this documentation and come away with a general "philosophy" of an approach to the architecture.

Then we build the architecture, updating our philosophy with our learnings along the way.

At the end of the project, we commit the philosophy to paper. We deliver the system we built, the philosophy behind it, and the stack of references used to build the system.

What this means is I can take any engineer at the target level they hire for, hand them the deliverable and say "go spend a week reading these, you'll come back with sufficient expertise to own this system."

It also acts as documentation for myself for future contracts if I get brought back in. Prior to starting the contract I can go back in and review all of those deliverables myself to hit the ground running once I'm back on the project.


Sounds like an architecture decision record. Here's an example ADR template: https://github.com/joelparkerhenderson/architecture-decision....


This sounds like the right way to do it. For me it has been tough to come up with principles that don't sound like they apply to any system. You start off with a generic CRUD app, but as it grows, the default/usual web framework constructs tend to leave you with a ball of mud. You can couple anything in there together, and since you're pressed for time, you tend to do it. Abstractions feel premature, and when they start emerging there's a lack of conviction to push through with them and clean up the whole thing.

Do you have any starter resources to come up with principles for a system? Maybe something showing how certain principles lead you to implementation choices that would've been different under another philosophy.


For a 2 week Terraform audit, these are the high level philosophy points I put together. The final doc was 10 pages. Each point lists the reason for choosing this approach and any trade-offs that come with it.

* Small composable Terraform modules

* Don't manage IaC declarations alongside code in a polyrepo

* Direnv for managing env configs across repos

* Manage k8s using k8s manifests and not terraform files (kubectl provider gives us this)

* Delegate flux management to flux-cli

* Auto-unseal Vault to capture and protect the vault token
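To make the first point concrete, here is a sketch of what "small composable Terraform modules" can look like; the module, resource, and variable names below are invented for illustration, not taken from the audited setup:

```hcl
# modules/s3-bucket/main.tf -- one small module, one responsibility
variable "name"        { type = string }
variable "environment" { type = string }

resource "aws_s3_bucket" "this" {
  bucket = "${var.environment}-${var.name}"
  tags   = { Environment = var.environment }
}

output "bucket_arn" { value = aws_s3_bucket.this.arn }

# environments/prod/main.tf -- compose the small modules per environment
module "assets" {
  source      = "../../modules/s3-bucket"
  name        = "assets"
  environment = "prod"
}
```

Each module stays small enough to review in one sitting, and environments are just compositions of those modules with different inputs.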

Then a list of recommended reading:

* Terraform Up and Running

* Building Microservices

* Site Reliability Engineering

This list is more tactical since we didn't build the system, we were auditing their current setup.


> Don't manage IaC declarations alongside code in a polyrepo

Can you elaborate on this one?

(And thanks for the interesting comments)


> The source code for services should not be coupled to their deployments when managed in SCM. The lifecycle of changes for infrastructure is different from the lifecycle of artifacts for services. Any artifact should be able to be configured and deployed into any (supported) infrastructure configuration. For example, running git revert on a service should be able to yield a deployable artifact regardless of how the infrastructure is configured.

> By coupling these, you tie changes to infrastructure to changes in services. A rollback for your service can also unintentionally roll back how that service gets deployed – and avoiding that requires an engineer to hold both the context of the infrastructure and the context of the service in their head whenever they are managing git history. It becomes difficult to deploy an older version of a service for testing. It also breaks git-bisect since, now, searching for a regression in software also changes how that software is getting deployed.

> This is an extension of managing IaC as small composable modules. The source code for a service should, itself, be viewed as a composable module. That module may take on a different format than other pieces of IaC (i.e. a .tar.gz, a .deb, or a docker image instead of a terraform module or a terraform provider) – but its API contracts are still drawn around the unit of deployment and not the monolithic infrastructure stack that will be deployed with it.

This, of course, does not apply to projects using a monorepo.


how would one manage independent infra and services in a monorepo?


The philosophy is the same, but the implementation is different.

You still keep them separate but your monorepo tooling handles that separation. When multiple changes go out together, some being infra some being code, the tooling should be aware of those dependencies (just like any other) and handle resolving the infra first.

The muscle memory of devs in a monorepo tends to be different too. Folks are used to scoping their SCM changes to folders instead of working at the top level of the git repo (i.e. you usually don't find yourself doing a `git reset --hard HEAD~10` in a monorepo outside of a feature branch - other teams get grumpy when you blow away their changes on the mainline branch).

I make this distinction between polyrepos and monorepos for IaC because I've seen this advice result in folks splitting their monorepo into a birepo, or using IaC adoption as a driving reason to migrate their company to a polyrepo. There isn't anything wrong with the birepo approach, but it can be accomplished inside the monorepo all the same.


> All because a year ago someone (well myself and anther person) took the time

I've been saying for half a decade or longer:

"Going slower today means we can go faster tomorrow".

It took a long time for some of my team members to process this, but I believe they've all taken it to heart by now. The aggressive, rapid nature of a startup can make it very difficult to slow down enough to consider more boring, pedantic paths. Thinking carefully about this stuff can really suck today, but when it's 3am on Saturday and production is down, it will all begin to make a lot of sense.

Having empathy for your future self/team is a superpower.


“Slow is smooth, smooth is fast.”


This has won countless races for just about every top F1 driver you can name for decades, probably WRC too. That old analog world transfers nicely to digital in video gaming. Sadly, it's not more widely accepted in software development, though software design and software deployment seem to have caught on.

As an old C++ hacker, I'm waiting for the day when modern C++ shops read Accelerated C++ from Koenig and Moo circa two decades ago. Then, I could rejoice in someone anywhere writing C++ code that more closely resembled the python-esque C++ masquerading as pseudo-code in that book.

More sadly, I just keep seeing people emulate bit-twiddling from yesteryear when the compiler likely optimizes a fair bit of this.

The cyclomatic complexity scores in the paper look off by an order-of-magnitude but they may be better than the laugh riot I've measured in the last few years and my math may be failing me at runtime.


Racing is a very flawed analogy. The big differences between software development and racing are that:

1. F1 paths are known in advance.

2. The major unknowns in F1 are your competitor behaviors.

Compare that to a typical startup: you're mostly riding in the dark on a track you see for the first time, and your major unknown is customer behavior.


Interestingly, I learned this adage in object manipulation (festival fire or LED dancing - hoops, poi, staff, etc). The community breaks things down into "flow" and "technique". Flow is highly improvisational, tech is highly practiced, and you really cannot do one without the other, even if everyone has a lean toward one or the other.

So: the steps in the path are known in advance, but not the order, presence/absence, quantity, arrangement, etc. The major unknown is what you will do next. The best performers are highly reactive to, and involved with, the audience and colleagues (musicians).

(This ofc changes for choreographed performance)

As a full stack dev, I’ve got a stack of patterns (techniques) in my pockets to pull out for this or that situation, but I don’t really get to know which one will be the next one I’ll need. And I do my best work when I can get involved with the end users, interacting with them to grok their needs; and with my coworkers, so we’re a team.

Slow is smooth, smooth is fast.


Reductionist history doesn't help here. Software development predates startups by twenty or thirty years.

I've crashed and burned startups while never comparing any of them to driving half-blind without my glasses at night.

Human sciences and user research provide excellent solutions to customer behavior. Like F1 cockpits, the risk scales with the domain.

F1 is not just a vector sport. If it were, math might be enough to win. Turns out F1 takes engineering, mechanics, and a driver.

However, viewed through a macroscope, F1 dynamics are closer to a cooperative game, as in software development.

While an unknown in software is competition, much larger unknowns are given by shifts in teams, machines, and their methods.

Turns out that F1 and software development from 1970 to 2020 tell remarkably the same story, for much the same reason. Neither exists in stasis.

F1 and software development have more in common than is obvious from the grandstand.


Yuup. Unfortunately, there are profit disincentives to this. Time to market for new features is a thing. Getting features out fast gets you kudos from the suits. So you get a class of dev that spins out code wickedly fast while at the same time leaving a mess for others to clean up.

It's hard to correct that sort of behavior (without being an actual manager that knows code and can spot bad architecture).


There’s a point in a company’s trajectory where quality becomes more important than quantity (speaking specifically of software features here). Early on, it usually makes sense to throw things at a wall and see what sticks. But once there’s a sense of product-market fit, the engineering org needs to buckle down and focus on doing things slowly, methodically, and correctly.

There are also engineers who prefer each of these kinds of work. The first kind thrive on quick wins and kudos from founders; early engineers probably need to be OK with bugs and edge cases that they’ll never go back and fix. Personally I don’t like doing work like that, but I’m definitely in the second class of engineers, who need systems to be modular, composable, and well defined.


Have you ever measured this alleged speedup when "tomorrow" comes?


OP did a 3 day task in 2 hours.


Measured relative to what?


You said:

"Going slower today means we can go faster tomorrow".

So I guess, relative to yesterday?


> I'm always thinking about "Can I (or anyone) get back into this easily 6 months from now?"

As I age, my memory is getting worse and worse and I realize that quite clearly. Therefore, I always try to write documentation as I'm writing code, so that I can remember why I did something. It helps a lot so that 6 months later, I can do exactly that... but I also know that anyone else looking at my stuff will also realize why things are the way they are.


I’m the same way: notes, good documentation, etc.

Sometimes I think I get some tasks done faster than when I was younger…


> I'm always thinking about "Can I (or anyone) get back into this easily 6 months from now?"

People I work with get very annoyed with me because of this, but I am obsessive about documentation for this reason. Sure, it requires a lot of tedious writing and screenshots, etc., but it has saved me countless times. I still can easily get back into things years later thanks to documentation.

The caveat is when people who are not as passionate as you maintain the product and seemingly forget about the documentation.

In the old days, documentation was a very strict requirement on many of the projects I was involved with. Now, in modern agile projects, it’s an afterthought at best, despite our having amazing documentation tools that we never had before.


What ways have you found for keeping the documentation in sync across frequent changes?


Leadership, Process and Discipline


Exactly. As in, the documentation doesn't actually stay in sync.


Do you mind expanding on which tools you are using for documentation (creating, maintaining etc) please?


I’m a big fan of wiki-type tools such as Confluence, but Markdown is even better because it’s just code and can often be stored in the repo along with the code. There are of course pros and cons to both. Wikis are easier to use for more complex cases and especially for screenshot support, tables, charts, etc. On the other hand, Markdown is far more portable and better for long-term maintenance since it’s not subject to the whims of the documentation provider tool itself.

One thing I’ve done is to maintain a separate Git repo that only hosts documentation. This in combination with a simple UI that dynamically converts .md to HTML on-the-fly (or renders a cached version) seems to be a good compromise.
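As a toy illustration of that on-the-fly conversion, here is a minimal sketch; the function name, the caching scheme, and the handful of regexes are all made up for illustration, and a real setup would lean on a proper Markdown library rather than hand-rolled patterns:

```python
import re
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=128)  # crude stand-in for "renders a cached version"
def render_md(path: str) -> str:
    """Convert a tiny subset of Markdown (headings and paragraphs) to HTML."""
    html_parts = []
    for block in Path(path).read_text().split("\n\n"):
        block = block.strip()
        if not block:
            continue
        heading = re.match(r"^(#{1,6})\s+(.*)$", block)
        if heading:
            level = len(heading.group(1))  # number of '#' marks the level
            html_parts.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            html_parts.append(f"<p>{block}</p>")
    return "\n".join(html_parts)
```

A simple UI would call something like this per request, with the `lru_cache` avoiding re-parsing unchanged files until the process restarts.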


Like real clouds, the cloud won't look the same in 6 months. I get a stream of daily emails from Azure: end-of-life this, upgrade that, secure this, etc. Those bits will rot, unfortunately.

There might be sense in renting that machine from Hetzner and sticking Ubuntu on it after all.




