This is incredibly simple yet incredibly powerful, and something that everyone who becomes proficient at delivering things of value learns eventually, but is rarely taught so succinctly.
By the way, for the programming case, this is a big part of the reason functional programming is so powerful. Avoiding shared state allows you to write your outline of smaller and smaller pieces, then write each piece as a stateless function, then pipe your data through a graph of these functions.
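As a rough sketch of what that looks like (TypeScript, with invented order/invoice names), each piece is a stateless function and the app is just the wiring:

```typescript
// Hypothetical order pipeline: every step is a pure function that takes data
// in and returns new data out, with no shared state between the pieces.
type Order = { items: { price: number; qty: number }[]; coupon?: string };
type Priced = Order & { subtotal: number };
type Invoice = Priced & { total: number };

const priceOrder = (o: Order): Priced => ({
  ...o,
  subtotal: o.items.reduce((sum, i) => sum + i.price * i.qty, 0),
});

const applyCoupon = (p: Priced): Invoice => ({
  ...p,
  total: p.coupon === "TEN_OFF" ? p.subtotal - 10 : p.subtotal,
});

// The "app" is just the wiring: pipe the data through the graph of functions.
const invoice = applyCoupon(priceOrder({ items: [{ price: 20, qty: 3 }] }));
```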
With other programming paradigms, you can't just knock out all the little pieces without thinking about the other pieces because the state is all tangled up. Which slows you down.
It's surprising how very simple the individual components can be, even for very complex systems, when following this approach.
I’m not so convinced that this is a property of functional programming as much as simply good programming. I’ve seen functional programs that pass around huge data structures that couple functionality. I’ve never seen the benefit of performing elaborate monadic dances to avoid state that would have been simpler to represent in a non-functional language.
One benefit there, if it's statically typed, is that you've pushed some correctness validation into the compiler.
In dynamic languages like Clojure it is far too easy to couple functionality and write implicitly imperative but superficially functional code. Something I'm guilty of because it's too easy to do and takes a lot of experience and discipline to avoid.
Well this seems to me a violation of the spirit of functional programming, even if it's written in a functional programming language.
Isolating state is the key principle. Philosophically you could say a program following this principle written in an object oriented or procedural language is more "functional" than a program that passes around a big tangle of state in a single data structure written in a functional language.
Yes, indeed. That’s my point really. Functional programming only delivers its promised benefits in the hands of skilled users. We’re just not used to seeing that, given that it attracts a self-selecting crowd. Lambdas, ADTs, immutability - all of these exist in non-functional languages now and it’s great! It seems to me that this may be where the bulk of the value is, rather than functional purity.
Yes. Ideally a codebase is just a monorepo of pure functions and apps are simply the control flow that weaves them. Write reusable libraries not microservices (silos).
Sadly incentives are not aligned for this at scale - easier to buy cloud SaaS n+1 whose sales team insist "this $badware solves a Hard Problem" while your devs sit in ceremonies all day.
That's why Paul Graham started Y Combinator and handed the entrepreneurial reins to the developers instead of the project managers and MBAs and made billions in the process.
Maybe that was true once; as I understand it, current YC is just a finishing school for T1 grads to pad the CV between graduation and working as a "thought leader" (lol)
To be fair to other styles, we generally learned already that shared state is bad. It's avoided in basically every language/service these days. It may be enforced more strictly with functional programming. But "With other programming paradigms, you can't just knock out all the little pieces without thinking about the other pieces" is taking it too far. For example, you want to add email sending to your app? The library for it is a little piece of software independent of the rest.
> To be fair to other styles, we generally learned already that shared state is bad. It's avoided in basically every language/service these days.
I don't know how true that really is. JavaScript added `class` relatively late in its lifespan, and most JavaScript/TypeScript projects use `class` to hide shared state/mutations behind that interface, in the name of `encapsulation`, rather than just passing stateless data from function to function.
Both patterns obviously have their places, no silver bullets and all that, but you'll have a hard time finding any relatively popular JS/TS project that doesn't overtly rely on shared state one way or another.
Objects with instance variables don't automatically mean the state is shared. A JS event loop holding onto a simple object with some data is not practically different from a functional event loop with some big context carried between executions. One will have mutations; the other will do effectively the same thing with tail calls and some IO executor. In both you have to add the shared state explicitly; one just makes it way harder.
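To make that concrete, here's a throwaway counter example (TypeScript, names invented):

```typescript
// Object with an instance variable: the state lives inside the instance and
// is not shared unless you pass the instance around.
class Counter {
  private count = 0;
  increment(): number { return ++this.count; }
}

// Functional equivalent: the state is threaded explicitly through each call.
const increment = (count: number): number => count + 1;
const finalCount = [1, 2, 3].reduce((count) => increment(count), 0);

// Either way, sharing only happens if you deliberately hand the state
// (or the object holding it) to other parts of the program.
```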
> you'll have a hard time finding any relatively popular JS/TS project that doesn't overtly rely on shared state one way or another
That's sampling bias. Where are the thousands of event-driven DOM UI Haskell projects that don't rely on shared state?
> Objects with instance variables don't automatically mean the state is shared.
Agree, but JS's syntax and constructs are almost begging the user to encapsulate their state in classes and instances and share those instances between function calls, rather than passing the data itself around. This is very visible if you browse the various JS APIs as well.
Can't "sendEmail" be a function near the edge of the network that does the IO of actually pushing the bytes across the network, decoupled from the "pure" functions at the heart of your application?
> With other programming paradigms, you can't just knock out all the little pieces without thinking about the other pieces because the state is all tangled up. Which slows you down.
I don't think so. I use the style of programming described in the post, and my code is mostly OOP, but almost entirely without mutable state. You may claim "but that's not OOP", but I would reply that FP is not about having no shared state either (which a lot of people were quick to tell me when I myself made the mistake of confusing immutability with FP, as almost all FP languages allow mutation without much ceremony); that's just something encouraged in FP, and it can easily be encouraged in OOP as well.
I think FP has a much stronger emphasis on avoiding shared state than classical OOP precepts. Smalltalk style OOP is about encapsulating state in objects, which is queried or implicitly updated by sending messages to the object. Whereas FP emphasizes a cleaner separation of functions and the data structures those functions operate on.
Interestingly, Erlang is very much a functional language. But Erlang processes are a lot like Smalltalk objects, as explained by Joe Armstrong. You send messages to a pid (process id), which then updates some state held by the process and possibly sends back a value in response. But the new state is always computed by a pure function.
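The shape of that pattern, sketched in TypeScript rather than actual Erlang (names invented):

```typescript
// A mailbox loop owns the state, but every new state comes from a pure function.
type Msg = { kind: "deposit"; amount: number } | { kind: "balance" };
type State = { balance: number };

// Pure state transition: (state, message) -> [newState, reply]
const handle = (s: State, m: Msg): [State, number] =>
  m.kind === "deposit"
    ? [{ balance: s.balance + m.amount }, s.balance + m.amount]
    : [s, s.balance];

// The "process": the only place the current state lives.
function spawnAccount(): (msg: Msg) => number {
  let state: State = { balance: 0 };
  return (msg) => {
    const [next, reply] = handle(state, msg);
    state = next;
    return reply;
  };
}
```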
> With other programming paradigms, you can't just knock out all the little pieces without thinking about the other pieces because the state is all tangled up. Which slows you down.
Of course you can. I don't know why people think you can't write functions that don't change a global state in any programming language. Pretty much any experienced programmer does that whenever they can.
To tack on to the other responses, this is just good programming. The testability/maintainability/extendability tenets push you to write small pieces of functionality before wiring it all together.
This is how I work on my projects as an indie dev. When I start working on something significant (a new feature, for instance), I'll create a markdown file that has a summary of what I'm trying to achieve and then a TODOs section which turns into this massive outline of all the tasks that I'll need to do to complete the work.
At first, the outline just has a few tasks that are fairly high-level, but as I dive into each one, I add more nested sub-tasks. The nesting keeps going until I end up with sort of leaf nodes that can be done without depending on other tasks. This gives me a nice visual of how complex some tasks are versus others.
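For instance (hypothetical feature and task names), a fresh file might start out something like this:

```markdown
# Feature: CSV export
Summary: let users download their data as CSV.

## TODOs
- [ ] Backend
  - [x] Add /export endpoint
  - [ ] Stream rows instead of loading everything into memory
- [ ] Frontend
  - [ ] "Export" button on the settings page
  - [ ] Progress indicator
```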
I generally prototype and design the implementation at the same time, and having this outline gives me a place to "dump" tasks and other work I'll need to do later. You do often encounter more work than you expect, so an outline makes it easier to find a good "parent" for each new task. Having a big outline also lets me jump around from high-level design to low-level implementation easily, which you need if you're prototyping and trying to find the right shape for your solution.
It's great as a motivator too since I can see when I complete something big when I check off a parent task that has a lot of nested children.
I find a simple text file outline like this is so much more convenient than say an app or a web UI since I can just jump around the file and cut and paste outlined sections and indent or un-indent them to re-parent them. (Having to use something like JIRA to do this would be way too slow, especially when you're in a flow state.)
Same here. I wrote a little multitree-based TUI with vim-adjacent key bindings for exactly this purpose, since I find it generalises to all complex projects, software-related or not (and who can resist any excuse to write a TUI?), but a simple markdown file is just as good, really, and for software means you can keep it in the repo directly adjacent to other project docs.
Generally, with this type of work (where I'm trying to go fast), I have to be flexible, so I will often just let nested tasks "die off" after I've found alternative ways of solving the problem or I've changed the idea a bit.
Sometimes I'll delete the nested entries outright, but usually I'll just keep them around until I get to a point where I'm close to completing the feature and then I'll re-visit them to see if they still apply or if I need to reincorporate them into the new design.
This looks very cool, especially for hobby projects. I follow approximately the same flow with infinitely nested TODOs in logseq.
The cli tree flow is very likely better, but those destructive pops -- it would be hard for me to let go of the ability to look back at the end of the day retrospectively and see the path that was explored.
> The cli tree flow is very likely better, but those destructive pops -- it would be hard for me to let go of the ability to look back at the end of the day retrospectively and see the path that was explored.
It's a trade-off: aggressively pruning the noise leaves a lot of signal. I have also found that, when writing down goals/objectives/tasks/whatever, knowing in advance that they are going to be discarded once done makes them more focused on achieving the goal, rather than trying to document what is done.
Essentially, when adding nodes, I add directives to be filled, not documentation for what was done. This keeps me focused on achieving the goal without getting side-tracked by putting in explanatory documentation for future me.
The notes I make are to allow future me to implement $thing, not future me to understand $thing.
I thought that might be the case, and I did stop to wonder if I just wanted to see the path out of pure self gratification or if there's something valuable in taking a step back and assessing the process after the fact.
I use a modified form of https://xit.jotaen.net/ for my task lists. xit uses [~] notation for obsolete tasks; sometimes an entire branch gets this. I also avoid fleshing out tasks in detail until I've settled on the design for the higher-level goal.
Similar here - I use Asana or Linear for high-level planning with a calendar, and then as I write code I drop in TODOs and FIXMEs and such, then just grep them out or use a VS Code extension called "TODO Tree" to track them.
I have done something similar in the past and I was very happy with the results. At the time I was starting up a consulting business and got the first few gigs directly from engagement with my blog.
I also time boxed myself when writing. I wouldn't write unless I had a really clear topic in mind, then I'd give myself an hour to get it published. A few times I ran out of time and ended it with a "follow-up post coming soon to dive into ___" type message and that worked just fine.
The author provides a very good example in one of the video illustrations, with the left-hand side showing "loading bar" writing and the right-hand side simultaneously showing "outline speed running" writing.
This is a good way to maximize speed. I'm not convinced it's also a good way to master quality. Rushing ("speedrunning") to a first working version may force you to choose sub-optimal paradigms (algorithms, data types, etc.) that you won't have the time or the will to correct later.
I'd even postulate that's why we have so many crap applications today that are first on the market but slow, inefficient and user unfriendly.
If premature optimization is the root of all evil, totally disregarding it leads to painful refactoring.
I think it's the opposite. I think quality often comes from evolution and iteration.
There've been so many projects where I get stuck because I want to maximize quality, so I get writer's block. The worst part is that sometimes you'll try to perfect something in your project that ultimately isn't of great value.
Building something quickly, and then iterating to perfect it seems to work for many people.
This is true for most things in life. People spend days and weeks on the logo, so the actual product never gets off the ground. People spend so much time planning the perfect vacation that it never happens. And so on.
Truth is, for most things in life, good enough is just good enough. Lots of things we do have a short shelf life anyways.
I guess deciding the right level of goodness (or perfectness) of the tasks/projects we do in life is a big skill in itself
And what many people on either side forget: neither approach is one-size-fits-all. There are some things that need planning up front (a car or a rocket) and some things that can be done agile and iteratively. Likewise, some things can't be made via solopreneurship/indiehacking and some things can't be achieved with classic VC-backed entrepreneurship. There's a time for both.
There are the stories of college professors who split their class into two groups, one group graded on the quality of a single photo/pottery submission, and the other group graded solely on the quantity of work produced, and the group tasked with producing quantity always produces higher quality.
I guess I don’t see why building a car or rocket would be different, other than we now know how to do it well.
When people were first building rockets, it was just a blooper reel of failures.
Is there some distinction along figuring out the theory/physics, versus figuring out the application, real world, material science angle? Like I could see spending a long time on the theory side, but once that’s understood, it seems like figuring out which materials can produce the required physics is quick iteration’s bread-and-butter.
> I'd even postulate that's why we have so many crap applications today that are first on the market but slow, inefficient and user unfriendly.
That’s certainly one way to get a crappy application. Another way is to find optimal paradigms only to discover that the problem that needs to be solved has changed and now the optimal paradigms are technical debt that needs to be worked around.
It can definitely lead to under-optimized code, but on the flip side, prematurely optimizing can waste time and lead to overly complex code that is difficult to maintain. The key is to know how much to optimize and when.
The point of the article isn't to show you how to produce a shoddy first version as soon as possible, but rather how to avoid things like analysis paralysis and prematurely focusing on style over substance. This applies not just to code but to pretty much anything you create.
By completing a skeleton as soon as possible, you get a better idea of the different components you'll need and how they will interact, before you flesh any of them out. I think there is real value in this approach.
Agree. In the context of software development, you might choose different tools (programming language especially) if your goal is rapid application development rather than general high quality and long-term maintainability. You can't easily go back and change those decisions.
This is one of the perennial software development questions: to what extent can you improve an existing solution with a flawed or now-inappropriate architecture or implementation? This topic turned up a couple of months ago. [0]
Much of the reason sucky applications suck is because the people who work on them can't change them quickly enough. If you can open up your IDE, grab a flame graph, and chuck out your shitty brute-force algorithm in favour of a dynamic programming one that you thought of in the shower, then one Friday morning you're likely to do just that.
I suspect that the “crap applications” issue arises not necessarily due to the method being wrong, but more likely due to people disregarding step 4 in the article: “Finally, once completely done, go back and perfect”.
It may be because of tight deadlines, laziness (it’s “good enough”, so why bother?) or eagerness to jump to the next project (because it is more exciting or profitable than doing the hard work of getting the details right).
I guess there is also a personality type factor that plays into it, because many people seem to just care about the hard requirements and cannot be bothered about things like performance, accessibility, design consistency, simplicity, maintainability, good documentation, etc., at least as long as nobody complains about it.
I'm not as much of an overhead strategist, but I do have a rule that I follow that matches this article: if I hesitate to start working on a problem because it seems too difficult, it's because that problem has not yet been broken into small enough parts.
I tend to hesitate because I know exactly that it will be a lot of long and difficult work to break everything down into small enough parts, of which there will be a whole lot, and work through them and integrate them all.
I agree, I follow the same principle. I would also extend it: "if you slow down when working on a problem, you might have stumbled upon something unexpected; identify it and break it down."
I have a similar rule when writing documentation. As soon as I find myself writing something in the passive voice, I know I’ve hit part of the system I don’t really understand. “This event happens” instead of “subsystem A triggers this event”.
Nitpick: “This event happens” is not in the passive voice. “This event is triggered” is in the passive voice — and so is “this event is triggered by subsystem A”. (What you probably mean is “writing something vague or lacking agency”.)
I've been diving into the science of learning, and the blog author clearly knows their stuff. For those intrigued by this field, here are some fascinating concepts worth exploring:
- If you are pushing technical boundaries you may need to prove something is achievable before you go back and do the comparatively easy stuff (build the website, set up the company etc.)
- As context switching creates a huge cognitive load it can be useful to create discrete chunks of work and not just jump around all the time.
The concepts are very similar to those presented in “How to Read a Book”[0].
The general gist is: create a mental outline of the book/material (via the table of contents), dive into interesting chapters, resurface to the outline, dive again, etc.
This strategy is useful for quickly building a mental model and using your own interest as the guiding light.
Similar to building quickly, the focus is on your attention/what’s most important, rather than traversing from the beginning to the end serially.
I’m constantly amazed at how differently I learned to do things from my father than from school.
My father had all sorts of approaches similar to this, and it’s how I learned to write essays (outside-in) and research (inside-out), and which I later applied to programming. It made school trivial and fun, and it’s what I’m teaching my kids.
I think it’s often called the “tracer bullet principle” as well. Get a full version of your system working quickly from end to end, then improve each part incrementally. Powerful stuff indeed, also for your motivation and sense of accomplishment. Nothing sucks the joy out of work more than building and building and not getting any feedback.
What's interesting about this is, I have always done what the author describes, and I just assumed that when people wrote (for example) an essay, they would outline all the points and the structure first and then go and fill in each section and refine it over time. Same with ideas and projects: I would do rough outlines, then add fidelity. Same with programming: I'll make an outline and go and refine it all.
It's strange, I assumed this so strongly that I never thought anyone would ever start writing an essay from the beginning without considering more of it, similar to just starting projects or code. I guess they do, and it works really well.
What's the breakdown of people's approaches to things here? Which bucket are you in?
> It's strange, I assumed this so strongly that I never thought anyone would ever start writing an essay from the beginning without considering more of it, similar to just starting projects or code. I guess they do, and it works really well.
It really depends on the task/text at hand. With some things I set out to do, I know I want to reach a concrete goal, then it's easy to do an outline first with the finish point being the known goal, then fill out the middle-pieces.
But at other points, you're not 100% sure what the goal is. Then starting from the beginning and just going with the flow until you reach something that feels like the goal is the way to go, and you'll adjust as you go along.
Other times a mix of the two is optimal, where you think you know what the goal is, but as you're halfway through filling out the outline, you see that another goal would be a better fit, and you adjust. Or you realize what the goal was all along as you're nearing the finish, and you go back and adjust.
Basically, there is no single path/process that fits all types of problems or people even. You try out different things until you find the way(s) that fit you and the stuff you typically handle.
I had a boss who was a very good programmer and a writer. He used to spend hours just writing the table of contents, hours. Once he was satisfied, he would finish writing the actual text very, very fast.
While he would write and rewrite the ToC multiple times, he rarely edited the actual content, no matter how long it was.
I suppose different strategies work for different people
As a hobby project, I started a market research/overview of the Belgian cybersecurity ecosystem [1].
This required me to write a lot more than before, although I've always enjoyed writing.
In the beginning, I wrote beginning -> end, with just a high-level outline in my mind. Now, I write bullets first and then expand them into paragraphs.
This has helped me write a lot quicker and I think the articles have become easier to read (which matters a lot online, where everyone reads diagonally).
This is a great article that summarizes a method I’ve already used for my work over the years. When writing a new project from scratch, I'll make a bunch of structure (defining modules, writing pseudo code), then start to fill things out piece by piece. Often I'll need to adjust the structure as I go, but it helps for building a mental model of a new project (at least for me).
not OP, but I do something similar with polylith[0]. Example structures [1]
I'll create the base directories (e.g. www, api, auth).
Then the components (e.g. config, data, geo, mailer, utils, web, etc.).
In each component I'll make a readme.md describing what the component should do. Sometimes this leads to large components, and when that happens I break the component directory into smaller ones (e.g. web-client, web-server, web-routes, web-middleware, etc.) and add a readme to those. Then (what I planned to do but usually skip) I add function names to the interface file based on the readme, then work on the implementation (I usually end up going straight to this and wish I had created the interface "guide", because now I've gone off track and need to clean it up).
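Roughly, the layout ends up looking something like this (names taken from the description above, not a claim about canonical polylith structure):

```
bases/
  www/
  api/
  auth/
components/
  config/
    readme.md      <- what the component should do
    interface      <- planned function names (the "guide" I usually skip)
    src/           <- the implementation
  mailer/
  web-client/
  web-server/
  web-routes/
  web-middleware/
```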
Not sure if this is a common way to polylith, or if I'm doing it wrong. It helps me keep track better than trying to search through outlines and notes that are scattered all over the place, or in an app I don't feel like opening or logging into, usually ending with me re-writing the same thing 2-3+ times.
The word "recursively" does a lot of work in the post.
Every project, I go in thinking I can do it quickly, and it never works that way, because the minimum viable or minimum lovable thing is a long way from the minimum actual concept of what I have in mind. I feel that I need to build enough that the user is engaged with it.
I feel like those first explorers willing to try out a new thing are incredibly valuable and their attention should not be wasted on the outline, but they should be presented with the owl.
Well, that's because it is how you draw an owl. You start with basic outlines to define size, pose and proportion, then you divide those outlined areas into smaller sections that locate important details like eyes, beak, ears, wings, etc., then you add rough detail to those parts, which breaks it down further, and you keep iterating like that until you fill in all of the fine details.
To me, MVP means "Do part A to a good enough level of perfection where customers will buy it, then release and get customers, support them while then working on part B as a value-add.." - often, in my mind, all parts, ABC add up to the whole package value. I don't want to break it apart.
I'm actually trying to do more planning and outlining first, and it has worked reasonably well for me.
Start with an outline of the different systems and how they connect, then outline one of those systems, the inputs and outputs to it. Then break down further into how does it get the input, where does it put the output, etc.
It has been remarkable in that I can actually feel joy for what I'm making again. I have also tried to start a blog with doing this type of "open design" - but blog writing requires its own planning and refining, which is an extra workload that I didn't intend to put on myself..
You can only learn how to draw owls by drawing owls, repeatedly. The more owls you draw, the better you will get so it helps to draw quickly and often.
It feels like the article is describing how to draw an owl, but in reverse. You know what the owl should look like, and you simplify all the way down until each step is little more than a tiny pencil mark.
The first explorers willing to try a new thing, as you say, will also be quick to try the next new thing. So they’re probably less valuable than you’d expect.
I did something like that the first time I had to write a device driver, but I did it kind of stupidly.
It was in college, and I had a part-time job doing system administration and programming for the high energy physics department. They had an RX02 8" floppy drive that they wanted to use on their VAX 11/780, which was running Unix/32V, and I was assigned to write the driver.
I basically started with a C file that just had empty functions for all the functions that I knew Unix expected a driver to have, and then started filling those functions with comments recording what I had figured out that they had to do.
Each started with just a few high level comments. Then I'd add more comments breaking those down, and so on, until I finally had all the information I'd need in there and could start coding.
That's when I then did something stupid. As I started implementing the things the comments described I replaced the comments with the code.
I got about halfway through before I realized that I should have been adding the code below the comments rather than replacing them.
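For illustration, the general shape of the approach in a TypeScript-ish sketch (the original was a C driver; these function names are invented):

```typescript
// Stub every entry point, outline it in comments, then write the code *below*
// each comment so the outline survives as documentation.
function openDevice(unit: number): void {
  // Validate the unit number.
  if (unit < 0) throw new Error("bad unit");
  // Reset the controller and mark the unit as in use.
  // ...implementation goes here, under the comment it satisfies...
}

function readBlock(unit: number, count: number): Uint8Array {
  // Validate the request size.
  // Issue the transfer and wait for completion.
  // Return the bytes read.
  return new Uint8Array(count); // placeholder
}
```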
This really works. When I used to work at big tech, I had a reputation for being incredibly fast and this is the method I used.
This is also one of the reasons why I never moved away from Workflowy as an outlining tool. Nothing else has come close to it.
If I have to add one thing, it is that when you are recursively building your outline, it might grow really big, and that might overwhelm you, so I recommend a pruning step where you prune some of the nodes to make the overall outline tighter as a final step before proceeding to building. But do not worry too much; cut nodes ruthlessly. Oftentimes you can get stuck at this point if you think too much.
Honestly, I keep things pretty simple. The fast outlining features and the "mirror" [1] are what I use the most: I mirror the current "speedrun" at the top level for the next day, so when I come in I don't have to worry about what to start on.
For me, this approach works great with one enormous exception: I must already know exactly what I'm going to write about.
I have tried outlining my writing countless times. But inevitably, the real work of thinking meticulously comes with the writing itself. In composing prose at the finest level of detail, I discover the true shape of the topic through its nuance.
I always throw out my outlines, no matter how many times I have iterated on them. My high level thinking couldn't sufficiently understand the topic.
PG expressed this well: writing is thinking, at least for some of us.
This is classic. Just move up and down the ladder of abstraction or a tree, collapse a node if its children are ok. If not, expand, fix issues, collapse it and move to another node in the tree or graph.
In particular if you have to build on existing systems, a top-down approach doesn’t always work well, because the overall design may well depend on details of the existing parts you have to integrate with. In that case, starting with prototyping a vertical slice of the functionality can be the better approach.
Exactly. I've found that even with a greenfield project, there is a tension between keeping things simple and avoiding full engineering so as to quickly get to an MVP, and the fact that under-engineered code creates technical debt that becomes more ossified the more you build on top of it.
My current thinking on a solution to this conundrum is this: try to craft the best architecture and engineering you can up-front _vertically_, but drastically reduce the workload by paring things down _horizontally_.
I really need to master this. I spend absurd amounts of time thinking about the littlest things. It can take a long time for me to mentally accept that the code is fine and ready to be committed to the repository and published.
I do that in a similar way. I start doing all parts at the same time, going back and forth until it’s done. It’s not a perfect approach, though. Two downsides I sometimes encounter: dependency, when some or most of the work depends on perfecting another part first; and, if the work is complicated, getting so overwhelmed that you don’t know where to start, or don’t start at all.
Frankly, this article might be better because it's very short and encourages you to go out and put it into practice immediately. Which is arguably more valuable than reading an entire book to make the same point.
I wouldn't say that this article and the book just "make the same point". Zerubavel devotes an entire chapter to explain how to get a good estimate of the time required to complete the project, another chapter provides tips about how to track progress efficiently, etc.
I think there's tremendous value in keeping and publishing the outline itself. I know this because I just spent a week turning a book back into an outline and discovered that there was a great demand for it.
I do this for writing, whether it's a book, an article or a D&D campaign. For me it's not about speed at all; it's the natural way my mind works, and if I try to do this linearly, line by line, I soon hit blocks in the process.
There is the spirit of "divide and conquer" in this method, which is a good thing.
However, I do not know how to follow this advice: "DO NOT PERFECT AS YOU GO. This is a huge and common mistake", because, when you develop something critical and apply TDD, testing is effectively a synonym for perfecting your approach to solving an issue. Not to mention that testing comes before the code itself: it pushes you to think carefully and come up with the best strategy before iterating any further.
Well, for example, one could test only a few things in the first pass rather than systematically testing everything, and then come back later to write more tests.
The first draft can, should even, be sketchy: it doesn't have to work all the way through; one can think of it as a prototype to help you understand the problem better[0]. It can even be discarded, for example if it's too far away from what is actually needed.
In that setting, TDD might be more suitable for a second or a third pass, once you have a solid grasp of the code's structure.
Sadly for me I don't find execution on a clear idea to be the problem most of the time in my work. It's choosing the right idea to execute on that's hard.
I have already been doing this. My problem is that I somehow obsess too much over the outlining, and by the time I come to filling in the outlines, I lose steam at about the 30%-40% completion mark.
Then, it feels very boring to finish up the filling of outlines and I get a feeling that dragging myself over a bed of gravel is easier than finishing up the whole thing.
This is almost a 100% "divide and conquer", but a good article with concrete examples.
Definition : "A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem."
It's trickier to scale, for sure. I have actually seen something very similar work really well at an early startup. The team was around 5-6 people and fully remote at the time.
The real key is communication and trust. It only worked well for us because we communicated frequently, the CTO/lead was really good at acting as a conduit between everyone working on related projects, and we all really offered each other the trust to let each other run with it and adjust on the fly.
It was really common for us to go into a week with a clear idea of what the next feature to build was, only to end the week with the person working on it finding a different path or a reason not to build that at all, often with an alternative to propose. We'd discuss it as a team when needed, but it was always clear that the person who was working on it was the expert on it at the moment and had a lot of sway in direction for that feature.
It isn't easy, and it did actually fall apart when the team scaled to 10+ people, but there may have been ways we could have adjusted and avoided that too.
And here I was, having been led to believe there’s No Silver Bullet. :(
Also lol at the comment that speedrunning and iteration lets you get the junk out of the way so you can really focus on where quality matters… a paragraph away from saying speedrun and iterate and then feel good about being 90% done while you sit around twiddling with the title bar styling.
Good advice, but it also depends on the size of the project. For a very large project you may have to do it in phases, i.e. start with a very high-level breakdown, and then proceed with phase 1 and park whole sub systems for later.
This is great. Top-down breakdown followed by bottom-up building. It resonates with the approach advocated by Paul Graham for building software in the book On Lisp.
After reading the article, it made me realise this is why Power Automate and most of those low-code platforms are inefficient as they force you to do things "loading bar" style.
This reads like a strategy for creating filler content. If you have to write a school essay an outline helps you churn through all the BS. By contrast, when you try to write something meaningful almost all effort goes into two things (a) figuring out what you actually have to say and (b) finding the right words to express it. School essays are written by people who don't have anything to say. Intro fluff. Chapter one fluff. Conclusion fluff. It's not real writing. You can speedrun it because no thinking is involved.
The same applies to fluff software, but only to fluff software. If you have to create a page with a dozen buttons with a bunch of click handlers, hook those up to basic AJAX calls, then yes, you can speedrun that as well. Because it's extremely easy work that involves no thinking.
Many things in life are kind of mundane and tedious: fake school work, taxes, cleaning, ironing, to give just a few examples. And having strategies to blast through that kind of work effectively is useful. But these strategies are absolutely unhelpful when you're trying to do anything creative or difficult.
You can't write a great spy novel with an outline like: 'introduce spy character', 'successful mission 1 in present time', 'flash back to failed earlier mission', 'introduction of big bad guy', 'flash back to tragic backstory', 'spy gets assigned special mission only his handler knows about'. Filling out an outline like this produces uninspired, boring, formulaic trash. You end up with bad airport reading, the equivalent of AI slop.
To give another example: PG spends MONTHS on a single essay. He is not bottlenecked by a missing outline. Speedrunning the writing process doesn't help. Figuring out exactly what you want to say is the hard part. Putting words on paper is trivial in comparison.
It's interesting that you use Paul Graham as an example. Another comment in this thread shows him using the outline technique for one of his essays: https://news.ycombinator.com/item?id=41149003
Yes, he didn't come up with the ideas from scratch as he was writing them. It took years to gain the experience and expertise. Yet he still saw the value in outlining when the time came to write it down.
> Filling out an outline like this produces uninspired, boring, formulaic trash
So write a better outline. George R. R. Martin and Agatha Christie both used extensive notes and outlines as part of their writing process.
> She made endless notes in dozens of notebooks, jotting down erratic ideas and potential plots and characters as they came to her: “I usually have about half a dozen (notebooks) on hand and I used to make notes in them of ideas that struck me, or about some poison or drug, or a clever little bit of swindling that I had read about in the paper”.
> She spent the majority of time with each book working out all the plot details and clues in her head or her notebooks before she actually started writing.
I think the main confusion is about the speedrunning aspect. You don't speedrun gathering thoughts, you speedrun turning those thoughts (which took weeks, months, or years to develop) into writing. You may not struggle with writing, but a lot of people absolutely do.
I think that's an overly uncharitable read of this approach. Lots of tasks involve difficult thoughts that need to be thought through before they can be completed, but they also have phases in which the work just has to be done. I'm in the middle of collaborating on an article for submission to a physics journal. I wouldn't call it filler work, but most of the complex thoughts on the problem have been thought through, and the work right now is creating a coherent story that goes over our results. An outline method would work fine for this part of the project.
As for the spy novel, I think the outlining is actually quite similar to how Sylvester Stallone described his writing process [0]. You wouldn't fill the outline with generic beats; you would put in your basic plan for the story.
It sounds like we are mostly in agreement, actually. Mathematicians don't start by creating an outline of a paper they have to write. They start by proving a theorem of some kind -- that's the part that involves thinking hard -- and only after they have something worth publishing does it make sense to think in terms of an outline. Proving the theorem can take a mathematician months (or a lifetime). Writing the paper takes an afternoon.
It's the same for software. By the time you understand the problem well enough that you can write down a list of things to be done, you're already way past the "thinking hard" stage.
Sylvester Stallone wrote the script for Rocky in 3 days. He could do this because he had already figured out the concepts, the theme, the characters and their personalities way ahead of time. He had worked on it in his head for years. By the time he started typing, 90% of the work was already done. Nothing Stallone wrote later in his career was as good as his original Rocky script.
I like this idea a lot! I’ll try it today. I think a version of this is how I, and most professionals(?) already work. But I do believe my process can be sharpened.
I think the education system encourages the loading bar style. We are taught how to answer questions. I think AI will push education more towards the second type, where the emphasis is on asking questions.
Like in a tech interview: instead of asking you some leetcode puzzle, I will ask you to ask me questions about the subject to demonstrate your knowledge. Kind of Socratic style.
Yeah, there's a place for coding interviews, but I'm lucky in that the one I give is basically "tell me about something you know a lot about", to find out how strong people are at their strengths.
I'm pretty sure LLMs have to generate text left-to-right, but there was a screen recording floating around that had "multiple cursors" of text generation at once, where it looked like the LLM was filling in a grid of text, so to speak. I'll see if I can find it.
3b. Realize that your outline is wrong. You did not fully understand the problem when you wrote it. Go back to step 1.
5. Realize that you are never completely done. You will not go back and perfect.
The result is... better than not making an outline, but it's hardly a game-changing approach.
I'm reminded of something that Mike Bloomberg wrote in his book: Make a detailed plan for your work. Write down every detail until you're satisfied that you know how to proceed. Then throw the plan away, because it is now worthless.