Systems design explains the world: volume 1 (apenwarr.ca)
337 points by peapicker on Dec 27, 2020 | 152 comments



One of the greatest System Design stories of all time is the evolution of mammals.

Warm blood? A huge, complicated heart? Are you insane? The metabolism rates will be off the charts.

Record-breaking spin-up times before adulthood? Where the parent animals will have to actively care for their helpless young? And spend bodily resources to produce a fluid with which to constantly feed each one? How could they ever compete with a species that lays 10,000 eggs a year, dashing off after — or before — they hatch?

And yet, as They Might Be Giants put it, these disruptive designs, so seemingly flawed, made the difference between extinction in the cold and explosive, radiating growth.

P.S. Did you recognize the chicken-and-egg problem in the second paragraph?


To me, this speaks to something entirely different: the idea that a complex system that works cannot be designed from the ground up, but must necessarily evolve from a simpler system that works.



There are examples of complex systems created in a short time:

1. Manhattan project: The Nagasaki bomb's implosion design was tested only once, at Trinity. The Hiroshima bomb was a different kind and was never tested in advance. The whole infrastructure to produce those bombs was created from scratch within three years.

2. ICBMs: They required shrinking nuclear bombs down to fit on rockets, developing the capability to steer rockets precisely enough over thousands of kilometers, and creating a whole industry around the manufacturing. Then scaling it up to hundreds of rockets.

3. Putting a man on the moon. Nobody died on the first attempt.

You could consider those three steps as an example of Gall's Law as they build on each other. However, I would say each of them is a complex system in itself.


I would argue that none of these things are complex systems, in the systems theory sense. A complex system is typically composed of a network of causal relationships so deeply interconnected that its behaviour is not easily reducible (in the scientific sense of isolating individual behaviours to be examined). Human designs intentionally avoid this and instead strive for mostly linear causality, precisely so we can test and develop individual parts in isolation.

For an example of a complex system (and how unintelligible they are), see this diagram of metabolic pathways in the human body: https://www.sigmaaldrich.com/content/dam/sigma-aldrich/artic...


It depends on the meaning of "from the ground up".


Is that how jets are designed?


They don’t call the F-22 and F-35 fifth generation fighters for no reason. There have been many iterative improvements since the ME-262 and Gloster Meteor that would not have happened without the intermediate steps.


yes, jets are composed of thousands of components, each attached to each other in assemblies. The assemblies are then integrated in such a way as to perform the functions of a jet: providing thrust, providing lift, steering, holding fuel, etc.


Most of these systems are evolutions from previous designs, however. How much is completely new?


To be fair, the last statement about extinction in the cold and explosive growth is largely due to the size of the brain. Humans figured out techniques to survive in the cold even while being warm blooded. No thick fur is necessary when we can just kill an animal and use its skin to get warmth. Then we discovered fire.


I'm talking about a disruption story that long predated humans: https://genius.com/7391311


> How could they ever compete with a species that lays 10,000 eggs a year, dashing off after — or before — they hatch?

This one sounds easy: by eating those unprotected eggs!


Well, yes — cold-blooded creatures usually have/had to go dormant at night (too cold to function) and they and their eggs were/are easy pickins for any warm-blooded mammal able to operate at full capacity during nocturnal hours.


They aren't flawed designs, they are simply optimized for better survivability of each individual offspring, at the cost of having fewer offspring.

https://en.wikipedia.org/wiki/R/K_selection_theory


>And yet, as They Might Be Giants put it, these disruptive designs, so seemingly flawed, made the difference between extinction in the cold and explosive, radiating growth.

These designs aren't seemingly flawed, they are flawed. Evolution produces flawed designs by "design".

Evolution trends towards survival. One thing critical for survival is equilibrium of the ecosystem, so on a macro level evolution will trend towards equilibrium even if that necessitates flawed designs.

In fact, flawed designs are critical to ecosystem survival. It works because competing designs to the flawed design are also just as flawed.

An invasive species on an island ecosystem is a perfect example of an over-efficient design destroying a local ecosystem and in turn eventually destroying itself.

This is why Ebola didn't spread as far as covid-19. Ebola is too efficient at killing, which reduces its transmission rate: it burns out its local ecosystem and thereby destroys itself before it gets far. The less efficient virus is the one that spread across the globe.


The fact that Ebola kills its host is actually a maladaptation. Killing the host is counterproductive, as the viruses in that host then die as well.

The ideal virus is one that easily spreads and does no harm to its host, or even improves the survival of its host. Our bodies are full of these organisms, like our gut bacteria.


You could say Ebola is designed to live in another, more competitive ecosystem where its abilities are rendered less effective, thereby maintaining an equilibrium. This ecosystem is the elusive reservoir that epidemiologists are looking for. The suspicion is that this reservoir is bats, and that bats have an immune system that can easily pacify the virus.

When the virus is introduced into humans it arrives as an invasive species overloading the habitat and destroying itself.


Yeah. SARS-CoV-2 has proven to be just about the perfect virus this year.


Not really, as the body kills the virus or the virus kills the body and then dies with it.


Poor choice of words on my part. When I said "flawed", I meant "unworkable, and less fit for survival than the competition"


Can you give a definition of "flawed" that doesn't make everything flawed?

An irresistible force can't move immovable objects and vice versa.


I think one reason for the negative comments (speaking as someone trained as a Systems Engineer, with 10+ years of experience, now working as a consultant and performing systems engineering occasionally) is that the majority of the other engineering disciplines choose to quantify anything fuzzy in a certain way and then state that it doesn't matter. For example, the human factor. The reason Systems Engineering is so abstract and high level is the assumption that those unquantifiable factors still have to be accounted for in "boxes and arrows", regardless of the fact that they can't be quantitatively engineered... and many people can't handle the fact that these things can't be quantitatively engineered. [Edit to add the statement after the ellipsis]


As a heavily 'left-brained' person with an engineering background, I've recently (in my middle age) started to appreciate those who pursue the social sciences more. They were a group I used to look down on because of the lack of rigor in their work, but recently I have come to realize that the problem is that the atomic unit of their domain (a human mind) is orders of magnitude more complex than the atomic units of all my favorite disciplines (physics: forces, chemistry: actual atoms, programming: instructions). While people like myself dismiss the problem (read: run away from it), they at least possess the fortitude to attempt it, even while stumbling.


Personally, I still dismiss the soft sciences a lot, despite having the same realization you have. Yes, the subject domain is much more complex than in physics or chemistry, which means it's much harder to apply any sort of rigor, and much harder to get an actual result. But that shouldn't be an excuse to stop trying! As the field gets more difficult - softer(!) - I'd expect the work to be done with much more care and at a much slower rate. Instead, I see that the quality of work has been sacrificed. The replication crisis happened not because soft fields were difficult per se, but because the work done was sloppy. It is my impression that the soft fields don't respect the difficulty of their subject domains.


> which means it's much harder to apply any sort of rigor, and much harder to get an actual result. But that shouldn't be an excuse to stop trying!

By this comment, if you program, I sincerely hope you've never written a buggy line of code in your life. I'd also expect you to have fewer bugs per line of code as your programs get larger and more complex.

Likewise, I'd expect your code to experience the opposite of "code rot" the more changing requirements you receive.


Of course I make bugs! But I strive to make fewer of them.

For instance, after reaching programming maturity (i.e. the stage at which I'm not attached to a specific language, but am proficient in several, and can do basic work in any language you ask me to), I've learned to appreciate and make use of strong typing, precisely because it lets me add more (compiler-checked) rigor to my code. Even my Common Lisp code is as typed as the type system will allow (which is quite a lot). Programming this way is still difficult and fuzzy, but a little less fuzzy. The same goes for embracing certain styles and patterns of programming, which all help to narrow down the scope of things any piece of code could be doing at runtime.
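
To make that concrete, here is a minimal sketch of the kind of compiler-checked rigor I mean, written in TypeScript rather than Lisp, with purely hypothetical names: a "branded" type keeps a raw number from slipping in where a validated amount is expected.

    // Hypothetical sketch: encode a validated unit in the type so the compiler
    // rejects raw numbers instead of leaving the check to runtime.
    type Cents = number & { readonly __brand: "Cents" };

    function cents(n: number): Cents {
      if (!Number.isInteger(n) || n < 0) throw new Error("amount must be a non-negative integer");
      return n as Cents;
    }

    function applyDiscount(price: Cents, percent: number): Cents {
      return cents(Math.round(price * (1 - percent / 100)));
    }

    applyDiscount(cents(1999), 10); // ok
    // applyDiscount(19.99, 10);    // compile error: 'number' is not assignable to 'Cents'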

I stand by my point: the more difficult your field is, the more care and rigor should be used in approaching it, if you care at all about making some kind of definitive statements about something that can survive the test of time.


>But that shouldn't be an excuse to stop trying!

I don't see much indication that people aren't trying.


The lack of rigor is pretty prevalent in life (legal systems, human communication, etc.) and it’s awesome because it adds a whole extra dimension of complexity that could not exist if things were rigorous.

For example, I can test whether my friend is sad about something without explicitly asking them and thus triggering them.

Or I try to see if someone is interested in me without explicitly asking and making it awkward for either of us.

Or a judge can choose to be more lenient to someone trespassing to save their dog.

It adds this huge "gray area" that can exist when things are wishy-washy, and it's like the lubricant that smooths out unique situations that can't possibly be covered in a rigorous system.


Kind of the „common sense“ part, always prone to being hijacked by corruption.


Do you think it makes sense to have formal training for Systems Engineering?

It relies a lot on experience, intuition, and dealing with uncertainty, yet that uncertainty makes it rather hard to develop an intuition. To me that suggests it should be learned as an apprentice, where the master substitutes for a quicker feedback cycle compared to the real world, where feedback comes back years later.

Only once we understand the field well enough to know what the deliberate practice for training should be does it make sense to turn it into a course.


That's an awesome question. I think that some aspects of systems thinking (such as defining requirements, or breaking complex problems down at a high level) should be included in all engineering disciplines, but I agree that it's definitely a field more suited to being something you transition to instead of starting in, mostly just because of the volume of general knowledge required to understand the whole system.


I’m shocked by how aggressive a lot of the comments in this thread have been. This is a well written article, regardless of where anyone stands on questions like whether to rewrite vs improve, build vs buy, etc.

Questions of systems design are anyway not answerable by appeal to a formal theory, at least not yet, so like most gray questions of life, they are not ones where one should take a fixed stand. There are too many unpredictable or emergent effects of an answer to such questions.


There's only one comment that addresses your point. The rest of the comments actually agree with the article.

Either way, I'm more arguing against the plethora of endless informal exposés on design and design metaphors that I see on HN every day. These things do little to move the field forward no matter how eloquently they're written. The endless flat circle continues to rotate.

It's like an overdone hollywood genre.


Very nice article. I always struggled with design, not because I didn't know how to, but because of the feeling of uneasiness that comes up when we don't have the details or imagination to see the repercussions of our designs. I'm stuck as a senior engineer, even though I have designed and implemented systems, because I think the veneer of design knowledge when talking to others is pretty thin, and as someone who reasons by causality and simulation, it's hard to hide it. I always wonder how people get around this. What books did you read? General ones, not the ones you use for interviews that crow "here's how to design twitter".


There is no external source of truth for complex or wicked design problems (other than the intended use or market). The best you can hope for is a library of stimulating reference works, exploratory processes, personal habits and muses to assist. It has been said with some truth that successful design tends to iterate between broader and highly specific contexts, and utilizes an iterative approach to solutions rather than attempting to successfully anticipate all requirements with one initial design.

The author of this piece seems to be using "systems design" as a synonym for broader-perspective design thinking, both at the level of emergent system properties and commercial implications, and while he raises some good points it seems a bit shotgun in focus, and the long-form article format a mismatch for the subject matter.

I have a library of largely design-related fortunes which I start my terminal with; they are useful to me for design-thinking stimulation purposes and are optimized for brevity. Many are derived from books or papers. https://github.com/globalcitizen/taoup


> Joel Spolsky wrote Things you should never do, part 1 about the company-destroying effect of Netscape/Mozilla trying this. "They did it by making the single worst strategic mistake that any software company can make: they decided to rewrite the code from scratch."

I'm not convinced. They only show a couple of examples, but there must also be counter-examples.


I've been involved in a number of attempts to rewrite systems that ticked all of the boxes of Mythical Man Month's second system effect.

The main things I've learned over 40 years:

* Rewrites will fail, whether they're called that, or called "lift and shift", or called "refactoring".

* The reason they fail is because the focus is entirely on the technology, not the users and how they use the systems

* Understanding business systems and users and how they interact is vital. Business processes haven't changed, accounting hasn't changed, company hierarchies and competitive markets haven't changed.

* What has changed is that where people used to be the "API" for a process touchpoint, that is now likely to be another system, which will have its own API expressing how it sees the process. Trying to "force" those systems to interact with yours in the way you want is a fool's errand.

* Abstraction is really important when defining business domains. "Sales" vs "Fulfilment", "Payment" vs "Charging", "Data" vs "Information", etc.

* Software can't solve all of the possible implementations of the business domain abstractions. You have to set constraints to limit the number of "configuration" elements. Things like "Sales are always retail to end user consumers so that we don't have to deal with VAT exclusive invoicing".

* Those constraints need to be agreed to by the product owners/managers of the domain. Otherwise, every potential requirement will add yet another "knob" to the control of the domain, yet another testing path etc.


I've been involved in rewrite-from-scratch projects that worked, because the scope of the system being rewritten was relatively small. So I've concluded that "rewrite from scratch" projects get a bad rap because they also tend to be very big, nebulously scoped projects, almost by definition. But that has nothing to do with the fact that they're rewriting a codebase, and everything to do with the size and scope of the project to rewrite a codebase.

Either way, though, we're led to similar conclusions as the article, regarding refactoring. If a codebase/system is too large to rewrite in a single project, you have to break the work into incremental steps. Theoretically, the only difference between "refactor" and "rewrite" is the scope of the affected code.


Most writes we do as engineers are rewrites. The only difference is whether we're rewriting a statement, a function, a module or an entire system. You are correct. It is the size of the rewrite that matters.


To put it pithily, when "the problem has moved on from the solution", a rewrite might make sense. What I mean is that the existing software has existed for long enough, without incremental upkeep and rework, that the problem it was originally solving has evolved to the point that the existing software is not only inadequate, it's wholly inappropriate.

I'm part of one of those rewrites right now, and while it's true that it's taken much longer and had other ancillary issues, it's clear that the problem really has moved on, and there are needs that can't be addressed without a fundamentally different solution. It's kind of trading a hard problem for a harder one: we're forced to really understand the domain we're serving, rather than simply reimplement a set of existing capabilities. (But then, I don't think maintainable, evolvable software can reliably be created without understanding the domain to begin with.)


> I'm not convinced. They only show a couple of examples, but there must also be counter-examples.

A couple of counter-examples that come to mind are Windows (with effectively a 15-year-long rewrite that required selling both lineages in parallel while keeping them compatible enough) and Mac OS (when transitioning from classic to OS X).


The original book is about an IBM mainframe OS that was a second system effect project, but which eventually succeeded.

The point isn’t that every rewrite fails, it’s that it makes things really painful and is rarely worth it. The Mozilla rewrite, Windows NT transition, and MacOS X transition all succeeded too, just painfully.

Thus, a counterexample wouldn’t be a complex rewrite that succeeded or failed; it would be one that was surprisingly easy, painless, and shipped on time, with the expected feature set.


In the article you wrote

> As for solutions, there isn't much to say about the second system effect except you should do your utmost to prevent it; it's entirely self-imposed. Refactor your code instead. Even if it seems like incrementalism will be more work... it's worth it.

That seems to be a misleading conclusion? Maybe the moral is rather that you will probably vastly underestimate the Big Rewrite. If it is still worth it at 10x the estimated effort, you should still do it (WinNT, OSX, etc). That raises the question of whether we could get better at estimating such Big Rewrite tasks.

Following that thought, the chicken-and-egg problem collapses into the second-system problem. You need to rewrite incrementally in small steps, but it only pays off once you have completed all the steps (or at least most of them). Until then you have just introduced inconsistency into your codebase. You left a local optimum to go for another one, and naturally it gets worse before it gets better. You had better plan ahead how to keep morale and management support big enough to cross that gap.


I don't understand what you mean by "original book". The comment I was replying to was referring to Joel Spolsky's blog post, which does not talk about IBM mainframes. It also does not merely claim that rewrites are painful. It says they are the "single worst strategic mistake that any software company can make".

The counterexample is thus not rewrites that were painless. It's rewrites that were strategic successes, which clearly Windows and Mac OS were. They are, after all, the foundation of two of the world's most successful companies.


Windows is not the same as Mac OS.

Microsoft was a software company. Other than accessories like keyboards and mice, they sold an OS to OEMs and applications that ran on the OS. They suffered through Windows NT as a transition and the Vista debacle. Without their monopoly market, those could have ended the company.

Apple was a hardware company. Other than MacOS and a few applications, they sold hardware that ran that OS. Their focus was on selling more hardware.

Windows had to maintain compatibility because the OS is the base, there isn't anything lower down. If they lost compatibility, then the argument for running Windows was reduced.

MacOS had to enable the new features of Apple's hardware. If it didn't, then there would be no demand for the new hardware. So the transition of MacOS from M68K to PPC to x86 to M1 is necessary to enable the hardware to advance. To Apple, it's a cost of introducing more advanced hardware features.


> I don't understand what you mean by "original book"

The Mythical Man Month.


It's telling that 20+ years later Netscape/Mozilla is still the go-to example of why rewrites are a bad idea. That gives us basically three possibilities:

1. Nobody is doing rewrites from scratch anymore. That's just not the case.

2. Nobody is talking about all the failing rewrites causing entire companies to collapse. I don't think anyone is convinced by this argument, it's basically several big conspiracies in one.

3. People have learned how to avoid the same traps, by reading about the Netscape/Mozilla fuck-up.

Basically, the problem isn't rewrites. It's badly managed rewrites.


4. Name-recognizable companies are typically bloated enough that they can absorb the damage from a failed rewrite without it being clearly responsible for their collapse.

5. Nobody is admitting to doing rewrites from scratch anymore.

6. Netscape/Mozilla became (due to path-dependence and happenstance) the canonical example, and no other examples have been interesting/memetic enough to displace it.

7. People have learned to keep failing rewrites dragging along until something else causes them to be abandoned, sparing them the label of "failed".

8. Other.


Exactly. I can tell you from experience that one of the companies I know had consultants out of Texas convince them that rewriting their bread-and-butter ecommerce system was the way to go, instead of improving the older one, which mostly worked but had some bugs now and then that could be resolved by their existing internal development team. This was a real disservice that cost the company a lot of time and money on something that had to be replaced again a year after it was "finished", just so the consulting company could make money and give their developers something to do.


Python 3, Angular 2, Polymer 2. Not all full rewrites, but breaking changes with lots of the pain from a rewrite.


I would add - those are kind of monolithic style app platforms. Big, complicated, installed software.

Most software is services oriented, and it evolves, piece by piece over time.

Is Facebook running any of the same code from 18 years ago?

Probably not?

When did they re-write? They didn't, they just evolved, piece by piece.


> 1. Nobody is doing rewrites from scratch anymore. That's just not the case.

This claim would be more compelling if it provided a few notable examples.


Kubernetes is the biggest and worst second system effect I’ve seen in years.


Is it an instance of the Second System Effect if you build something slightly different?

Google is replacing its internal systems all the time. For example, MapReduce was replaced with Cloud Dataflow [0].

[0] https://www.datacenterknowledge.com/archives/2014/06/25/goog...


I worked on one rewrite of an old system which I can't really talk about, but which was very successful (the only thing we kept was the data). I also remember reading many years ago that every major version of Windows is essentially a rewrite from scratch.


We just deployed a rewrite of a Hadoop (Sqoop, Hive, ...)/Teradata-based data lake and data warehouse to a cloud-based serverless lakehouse with Spark as the main workhorse.

This included migrating about half a petabyte of old data and rebuilding several Kimball-style data marts to a Data Vault approach, which pairs really well with the lakehouse-style tech stack.

This all took about 9 months, which is a long time, but it was in a highly regulated industry where compliance, security and data integrity get a lot of scrutiny.

In the end, nothing of the previous solution survived - no code and no infrastructure.


I have one counterexample but it's a much smaller project: a bespoke ecommerce platform with somewhat tenuous integration into decades-old Oracle EBS backoffice systems was successfully replaced with a heavily customized open source ecommerce platform, with much better backoffice integration.


One thing that changes the calculus, (at least for business applications), is the advent of "low code" platforms, which significantly change the economics.

If you've ever used one, you can see how quickly (and iteratively, which is key) you can spin up new functionality.


The ARM vs Intel story of leaving the low margin behind to focus on the high margin reminds me of USA vs China:

https://www.bloomberg.com/news/articles/2020-12-26/covid-fal...


The innovator’s dilemma doesn’t apply to USA vs China; it’s not like the USA would want to become a developing country again.

Besides that, seven years is a long time: China obviously has favourable characteristics in terms of population size, but it also faces enormous challenges and there are no guarantees, e.g:

https://www.chicagotribune.com/news/ct-xpm-1995-04-10-950410...


If you think Systems Design is bullshit, you haven't played enough Factorio.


I prefer to watch Nilaus do it on YouTube/Twitch, because his job prior to becoming a full-time YouTube content creator was leading an engineering organization, and he got his degree in "Transport and Logistics".

He understands the problems Factorio is presenting, and the "state of the art" for how to solve them, and it's a ton of fun to watch what he focuses on, what gets backburnered, etc.

I for sure recommend his channel if you like to see Factorio as (IMO) it's supposed to be played.


One of the funny things about the Intel/Arm situation is that in many ways it mirrors how Intel itself started out on low-end microcontrollers and microprocessors for PCs, and grew to displace pretty much everyone else in the high end: supercomputers, data centers, and the like. For many years it has been fairly apparent that the same thing was happening with Arm; when Arm thoroughly cemented its position on top of the smartphone market, it seemed pretty clear that they would eventually be able to push up into the laptop market and beyond; the main question was when.

Unfortunately for Intel, they've been hit by several major blows in quick succession: Spectre and the other microarchitectural vulnerabilities, whose mitigations wiped out a lot of their architectural advantages; their trouble with manufacturing at new process nodes, which allowed AMD to take the lead from them on high-performance x86; and Arm finally catching up with them in the desktop/laptop space while also starting to make some small inroads in the server space (though mostly on cost, not performance, at the moment).

One thing which may have made them more susceptible to the innovator's dilemma is the fact that they were already a victim of the second-system effect with their Itanium failure. They poured a lot of resources into a major new ISA, and it was a flop, replaced by the incremental x86-64. Despite that, their incremental improvements on the x86 architecture managed to wipe out all of the RISC competitors on the high end; so there's probably even more organizational opposition to major new architectural changes which could have kept Arm at bay.

I think it's interesting that it has taken this long for RISC to really pay off. The whole idea behind RISC was to optimize the ISA for modern, pipelined, OoO architectures, but it turns out that for a long time, a bit of extra complexity in the instruction decoder wasn't that much of a price to pay, and things like Moore's law and other architectural improvements kept it mostly irrelevant as a competitive advantage.

But finally, Intel has run out of running room on process dominance, and other things have happened like hitting more power limits and software that's able to take advantage of more available threads and relaxed memory models. Once you start to try to run more threads, the extra cost and complexity of those decoders starts to add up, and the strong memory ordering constraints mean that your hands are a bit more tied on huge OoO dispatch.

Anyhow, Intel is still the biggest semiconductor company in the world, and they have weathered other major issues like the famous Pentium floating point bug, Itanium, and Spectre. It will take a lot to dethrone them, and with their revenue and market share, they can lose a lot of battles and invest a lot of money into new technology or companies before they lose the war.

But the last few years make it clear that they need some big wins soon, or they risk turning into the next IBM, slowly losing relevance as they milk their cash cows while the world moves on.


The big tech company he refers to is Google.


In my experience, "systema design" has nothing to do with climbing the career ladder, except in a narrow sense of "architecture astronauts claiming credit for 'designing' the work their teammates actually designed and implemented. Most people get promoted by inventing useful or at least measurable new tech. building a horrible system doesn't hurt anyone, since text debt doesn't collapse the project until after promotion.


The problem with "systems design" is the lack of formality and the excessive use of conjecture. Nobody has a problem with Boxes and Arrows if those boxes and arrows have formal definitions (See category theory). The problem arises when those boxes and arrows represent vague unproven concepts with fuzzy meanings.

For something to be more legitimate it needs to be formalized into a fixed set of axioms and formal logical rules. Relativity is an abstract concept, but the rules it is based on are formalized. When nothing is formalized, nothing can be formally proven, and the whole field descends into "design arguments" where examples and philosophical conjectures are thrown all over the place with no one able to really prove why one design pattern is better than another design pattern.

The problem with this article is that he never goes into these formal definitions of what a system is, nor does he derive theorems. He just goes into example after example. His "revolutionary" thought here is that systems design may point to some underlying theory of design that applies to the world outside of computers. Big deal.

We've seen these concepts many times but in different forms: "design patterns", "architecture", "system architecture". The industry just travels in circles around all of these concepts, repeating the same old bs time after time, never converging on definitive answers. The lack of formalization prevents us from truly knowing what an optimized design is, so we just guess.

Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

Nobody "designs" the shortest distance from point A to B. That value is calculated through formal theory. However we can't fully understand the best way for a "human" to get from point A, in California to point B in Colorado. We're simply not advanced enough to come up with a formal theory that can optimize the individuals preferences, time, space, energy, speed and comfort. Thus we "design" redundant solutions and hope that with each iteration in "design" we converge closer and closer to an optimal solution. But there's no way we can ever formally know if one solution is "better" than another.

Design is another word for random intuitive guess. It is only deployed for problems where an optimized solution is unknown. "Design" represents an intense lack of knowledge. Anytime you see the word "design" or "architecture" you need to realize that you're descending into a field or talking to an "expert" where nobody knows anything really.

As a result, the entire field of design travels in circles. Most new programming languages or frameworks are a useless attempt at producing an optimum solution but ends up being just another iteration of the endless circle. Is golang the optimum programming language? can we ever truly know if it's better? Are we truly pushing the field forward?

I would say that for languages like TypeScript, where the error surface area is provably smaller than JavaScript's, or Rust, which also has a smaller error surface area than C++, we have moved forward at getting closer to an optimum solution. But overall, for things where a formal theory does not apply (type theory applies in the previous two examples), we aren't really converging on anything.

The field is rife with "fake" formalizations and fake legitimacy. If someone has the job title "System Architect" or "System Designer" you know it's complete BS. The guy is just making stuff up from intuition and experience; there is no formal science here. He is much closer to an "artist" than he is to a "mathematician", "physicist" or "scientist" and should not command the same respect as the latter titles. The lack of formal legitimacy leaves a lot of room for BS. A good number of people achieve these roles simply by playing politics, because there's really no way of knowing if the "architectures" they propose are inherently correct or "better".

The endless circle is very apparent on HN, where you just endlessly see new "design" blog posts on the latest metaphor that can be used for "design". In this case the "design" is "systems design" and the metaphor is the "real world". Why do people endlessly write articles on new design metaphors? Because "design" indicates a lack of so much knowledge that it's really all they can do... make comparisons and metaphors rather than calculate actual optimum solutions.


The guy is just making stuff up from intuition and experience; there is no formal science here. He is much closer to an "artist" than he is to a "mathematician", "physicist" or "scientist" and should not command the same respect as the latter titles.

Perhaps some artists should command greater respect among scientists than they do, and perhaps science and engineering are closer to art than they want to admit.

I too used to believe that knowledge was only real if it was formalized, and that intuition is fake and inferior. Now, while formalized knowledge is easier to record and convey, I realize that it is intuition that does the vast majority of the work in most settings. The ability to convey and gradually formalize intuition is indeed a mark of intelligence, but it is not necessary, sufficient or even desirable to wait until all intuition is converted to formalism before acting.

I think you are missing the fact that everything we do is implemented by humans, and that we are working with enough complexity and emergent behavior that we have to rely on intuitive pattern matching. All the formalism in the world means nothing if it doesn't change the way humans implement something, or if the problem has completely changed by the time a formal solution has been created. In my own professional work, as acknowledged by coursework I have had in the past, the bulk of real value has not come from, say, learning the math governing circuits, or learning new programming languages, but from building and applying intuition about systems of interconnected parts.

So if using semi-intuitive, semi-amorphous pattern matching to describe things helps humans deal with complexity sooner and faster than waiting for absolute mathematical formalism, then that is incredibly valuable to society and to the people who develop and apply improved intuitions.


>I too used to believe that knowledge was only real if it was formalized, and that intuition is fake and inferior. Now, while formalized knowledge is easier to record and convey, I realize that it is intuition that does the vast majority of the work in most settings. The ability to convey and gradually formalize intuition is indeed a mark of intelligence, but it is not necessary, sufficient or even desirable to wait until all intuition is converted to formalism before acting.

I never said all knowledge is invalid if it's not formalized. Far from it. I gave an example of a human getting from point A to point B. Clearly we have no choice but to rely on informal solutions when devising a "design" for aspects of our lives that are not open to formal treatment.

Overall, my point is twofold:

1. Often people don't know the difference between intuitive knowledge and formalized knowledge and attribute way too much legitimacy to "systems design." People think of these guys as scientists when they're really just making stuff up.

2. Informal intuition can, as a result, devolve into a trap where we endlessly come up with new frameworks, new metaphors and new "designs" without improving anything, because there's no formal way of verifying what is optimal. I am arguing that the author of this article, and articles like this, is just the latest iteration of an endless circle. He introduces nothing new and instead provides a new metaphor for us to ponder. Literally read it. Did anything change? Did you actually learn anything new? Was anything actually improved? No. And this blog post is one of multitudes of "systems design" and "system architecture" blog posts that have led to no real insight on anything. It's not just the blog posts either. The post is representative of "systems design" in the entire industry... endless circles of going nowhere.

The best example of the above two points is monolith vs. microservices. Which is actually better? What are the actual tradeoffs? Nobody really knows, hence the reason the concepts have oscillated in popularity and the resulting non-answer people come up with is: "Depends on your use-case." And still you get "experts" who pretend to understand this stuff as if it's legitimate "theory".

Let me give you another example. I can make up one of these "theories" and metaphors on the fly while offering zero substance.

You know how current architectures at most companies involve one giant monolith that does most of the work, surrounded by a bunch of microservices? It's no longer a choice between either monolith or microservices... everything is a hybrid now, with a monolith in the center and satellite microservices orbiting it. I call it the "orbital architecture theorem", which is basically a principle that says architectures have a tendency to develop centralized and distributed layouts simultaneously in nature. It's similar to the way planets and corporate organizations work.... blah blah blah. Come on man... you see where I'm going here with this BS? The entire industry is acting this way. It's endless circles of metaphors, BS theorems and laws and architectures that prove or show nothing.

The industry is ripe for formality in this area as we've been going in circles for decades. We are dealing with computers here: virtual idealized worlds ripe for systematic formalism. We don't even need scientific experimentation... just raw mathematical logic as there's no need to put a computer in a wind tunnel to measure unexpected aerodynamic effects.


> the resulting non-answer people come up with is: "Depends on your use-case."

> We are dealing with computers here: virtual idealized worlds ripe for systematic formalism.

I disagree with the premise. Generally, computers are the tool, not the product. The primary focus ought to be on the problem we're solving, not on the particular tool we use to construct a solution.

You're right: design patterns are couched completely in terms of the software domain. Just knowing a handful of design patterns doesn't help you understand when or how to apply them. When you confuse the map for the territory, design patterns look like the tool, rather than a particular example of how the tool is used. No doubt, that's problematic.

But I think you're under a similar illusion: that the computer _is_ the problem to be solved.

The practice of software engineering sits at the nexus of (at least) two distinct domains: computer science and the problem domain, whatever it may be. Software architecture is fundamentally about understanding the shape and structure of the problem domain, understanding the capabilities of your engineering tools, and creating an architecture where both domains support each other. Your overall system architecture should flex where the problem domain flexes, and may be rigid where the problem domain is rigid. (It really does "depend on your use-case".)

This kind of architecture isn't really about the computer. It's about the human developers and maintainers who have to inhabit this space for years, possibly decades. You have to think about the human factors, and understand (as best as possible) the problem domain you'll be working with. (And you have to be willing to tear it down once, finally, the problem has evolved sufficiently away from expectations.)

Architecture is just a discipline of making your life suck less later. It's not an easy discipline, and (clearly) not a well-understood one. But it's also not formal in the same way computer science or mathematics can be.


> Software architecture is fundamentally about understanding the shape and structure of the problem domain, understanding the capabilities of your engineering tools, and creating an architecture where both domains support each other. Your overall system architecture should flex where the problem domain flexes, and may be rigid where the problem domain is rigid. (It really does "depend on your use-case".)

Can I just say that I really like this framing, I think it succinctly captures why you see such a broad range in languages/frameworks and why there hasn't been one software solution that fits most problems.


Except the framing is bad, in my experience.

For any non-trivial software, the architecture quickly becomes 10% about problem domain, and 90% about internal code bureaucracy. The latter is something that's to a large extent problem-independent, and yet we're still spinning in circles about it for some reason.

(By internal code bureaucracy I mean the art of structuring and connecting all the abstractions in your code, the coding of pathways the data travels and transformations it undergoes, etc. Ever had a situation where a seemingly simple feature required you to touch half of your system, as you routed and transformed a piece of information from a place that produced it to the place that consumed it? That's code bureaucracy.)


It captured nothing in my view.

Even a formal approach won't necessarily yield a single solution for all problems.

Likely there are several optimal solutions for each problem that exists. What formality will tell us is whether or not a given solution is definitively optimal.

Right now our definition of an "optimal design" is whoever wins the "design argument" flame war.


I think you'll find this interesting. It's a bit random but in another part of this thread your name appeared out of pure coincidence from another poster:

https://news.ycombinator.com/item?id=25557413

In the github thread (started in 2013) mentioned by pcen, you were part of the big debate on bringing more formalization to javascript promises. You were on the winning side (at the time) of NOT doing this. Your comment here: https://github.com/promises-aplus/promises-spec/issues/94#is...

Likely you muted the thread since then but the conversation continued.

Now, in 2020, times have changed and typed JavaScript is now the dominant paradigm. However, the decisions made by you and your brethren back in 2013 continue to echo through eternity. The final comments in that thread (dated 2019) sum it up perfectly:

https://github.com/promises-aplus/promises-spec/issues/94#is...

and also:

https://github.com/promises-aplus/promises-spec/issues/94#is...

This github thread was brought up to me as an example of how my current thread is representative of history repeating the same mistakes. A decision to ignore formal theory in the past now results in a fundamental mistake that all JavaScript developers must live with.

The irony is that you, Jonathan, are part of both histories. You made a mistake then; the question is whether you are repeating a similar mistake now.

Regardless of whether you're right or wrong now, and disregarding the previous (minor) verbal scuffle we had... I thought you'd be interested in this tidbit of serendipity.


As an addendum, you may like to see what my thoughts are on formalizing the modularization of software systems. As noted elsewhere, I frame it in terms of concurrent systems.

https://news.ycombinator.com/item?id=25567740


Ah, yeah, I reminisce on that thread every now and then. No, I don't have it muted. I was one of the more moderate voices, as I recall. (See my sole subsequent post, where I expressed interest in seeing the approach come to fruition, to help us understand what place it should have within the promises ecosystem. The thread was raging quite loudly by then, and as a much-junior developer, I stayed quiet after that.)

At the time, I had only just started learning Haskell, and while I really liked what monads brought to the table, JavaScript wasn't in a place at the time where it was especially common. With hindsight, monadic promises was perhaps the earliest step toward taking FP seriously in JavaScript.

>> You recall, perhaps, how electromagnetism and the weak force are actually part of the same mechanism, but only at very high temperatures? Well, we simply can't turn up the furnace that high in Javascript. Certain formalisms work beautifully in Haskell but fall flat in Javascript without support from the language. Haskell's static typing lets you offload a ton of work onto the compiler, but Javascript doesn't help much at all!

Leaving aside how embarrassing my demeanor was, I still believe in the core idea here. You want to use a language that doesn't fight what you're trying to do with it.

>> The fmap/bind divide is one of those places; it's just much easier to use the library if you combine the two together.

I strongly disagree with myself here now. This was, as I recall, the crux of the entire debate, and I have fallen pretty strongly on the side of not auto-flattening since maybe only a year afterward. Yeah, it's "easier", but it's less predictable.

(Even at the time, I was a proponent of making sure `.andThen()` wouldn't run the next step synchronously if the promise was already resolved, because that would make for an edge-case that you'd always have to keep in mind. "Is it resolved yet?" So I clearly had some sense of consistency, but I hadn't had the same realization about map vs. flatMap on promises.)
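
For anyone following along, here is a minimal TypeScript sketch (illustrative only, not code from that thread) of the distinction we were arguing about: arrays keep map and flatMap separate, while promises auto-flatten, so .then() plays both roles and a nested promise type can never be observed.

    // Arrays: the two operations stay distinct.
    const mapped: number[][] = [[1], [2]].map(inner => inner);   // nesting preserved
    const flat: number[] = [[1], [2]].flatMap(inner => inner);   // one level flattened

    // Promises: .then() acts as both map and flatMap, so returning a promise from
    // the callback still yields Promise<number>, never Promise<Promise<number>>.
    const p: Promise<number> = Promise.resolve(1).then(n => Promise.resolve(n + 1));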

But back to the present:

> A decision to ignore formal theory in the past now results in a fundamental mistake that all JavaScript developers must live with.

This is why I more or less recused myself from yesterday's thread. I don't appreciate having to tear down a strawman of myself -- I am, and have been for many years now, a proponent of strongly typed, well-structured, indeed formal approaches to software systems. I also see value in an informal, phenomenological stance, because there's no getting around the fleshy humans that have to work with and maintain these systems. I try not to take either to exclusionary extremes.

> The irony is that you, Jonathan, [...]

Everything else aside, it's a bit of a taboo to address someone by a name they are not presently identifying themselves by. It's unearned familiarity.

> [...] the previous (minor) verbal scuffle [...]

For what it's worth, the remark involving "illusion" was intended to be a rhetorical device connecting the example of design patterns to the present topic. It was not meant to be a personal dig. I'll be more careful in the future.


>Everything else aside, it's a bit of a taboo to address someone by a name they are not presently identifying themselves by. It's unearned familiarity.

I mentioned your name to drop a hint at how I confirmed you and the person on the thread were the same person. Should have just been explicit about that. The hint in my mind is more obvious when communication is verbal and face to face but much of it is lost with text. My mistake. Apologies.


I disagree with you and you've made a mistake. You're the one under the illusion. I'll explain.

I know what you're getting at here. You just can't put it in words. And because you can't put it in words, you're unable to see the big picture. To put it plainly you're talking about this:

For a specific context, how do we organize and modularize code to maximize reusability for an uncertain future with uncertain requirements?

You think that this problem can't be formalized. And this is basically what you're trying to elucidate into words with your post. It's that simple.

However, this is the exact problem I am saying is ripe for formalization.

This is what I'm talking about and you've missed my point.

Who says we can't formally define what a module is? And who says we can't formally define what it means for a module to be more reusable? Who says we can't formally define program requirements? From these axioms we can define a calculus that allows us, in the best case, to calculate the most optimal program, and in the worst case at least to know whether one design is "better" than another design.

Instead, what the industry does is write blog posts about a metaphor, then give a bunch of examples of why that metaphor is cooler than some other design pattern, then repeat the same thing every couple of years.

Let me frame it for you so that you can wrap your head around what I'm talking about. Given tic-tac-toe, we can formally play the game in a way we can never lose. This is definitely not a design problem and one that can be calculated. It's very easy to see why this is the case because of the limited primitives you're dealing with in tic-tac-toe.

The "problem domain" defined within a computer is NO different. In computers You have a limited set of primitives in a rule based playing field: assembly language. The objective is not 3 in a row but whatever formal requirements your program has. Within this problem space there is either one or several configurations of assembly instructions that will fulfill that problem domain according to a formal definition of "optimal". That is the spirit of the formalization I'm talking about. A more complex tic-tac-toe problem.

The notion of what it means for an algorithm to be faster has been formalized. So if the problem domain was "how do you sort a list of shoe sizes in the fastest way possible?" then we ALREADY have a formal and definitive way to determine the best possible configurations of assembly instructions to achieve this goal. The problem is solved partially for speed. Picking a faster algorithm is no longer a design choice.

The next step is to formalize the notion of modularity and program organization and bring it out of the fuzzy realm of design and architecture and into a more formal realm where things are exactly defined. We came up with a number for speed (O(N)); who says we can't come up with a number for modularity?
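
Purely as a hypothetical illustration (a toy metric I'm inventing here, not an established formalism), such a number could be as crude as counting the dependencies that cross module boundaries, which already lets two layouts of the same files be compared with a number instead of a design argument:

    // Toy sketch: score a proposed module layout by counting cross-module edges
    // in the dependency graph. Lower means "more modular" under this made-up metric.
    type DependencyGraph = Record<string, string[]>; // file -> files it imports
    type ModuleMap = Record<string, string>;         // file -> module it is assigned to

    function couplingScore(deps: DependencyGraph, modules: ModuleMap): number {
      let crossModuleEdges = 0;
      for (const [file, imports] of Object.entries(deps)) {
        for (const imported of imports) {
          if (modules[file] !== modules[imported]) crossModuleEdges++;
        }
      }
      return crossModuleEdges;
    }

    const deps: DependencyGraph = { "a.ts": ["b.ts"], "b.ts": ["c.ts"], "c.ts": [] };
    console.log(couplingScore(deps, { "a.ts": "ui", "b.ts": "core", "c.ts": "ui" })); // 2
    console.log(couplingScore(deps, { "a.ts": "ui", "b.ts": "ui", "c.ts": "core" })); // 1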

I don't blame you for making this mistake. "Luck" itself used to be a design problem. Intuitively it's very hard to think of the concept of "luck" as a measurement problem. It was utterly incomprehensible for a typical person to even visualize luck as something that can be calculated. It was only after the development of probability theory that people were able to see how it can be done; it is much harder to predict the formal possibilities of a topic before the formalization has actually been done.

It's the same thing for module organization. "Design", so to speak.

>Architecture is just a discipline of making your life suck less later. It's not an easy discipline, and (clearly) not a

You can't even define what it means to make your life "suck less". Are you talking about algorithmic speed? Because that's been formalized. Are you talking about fewer bugs? Because smaller error surface areas have been partially formalized with type theory and Rust's borrow checker...

Are you talking about organization of modules? Because if you're talking about a formal way to organize modules, well that hasn't been done. But don't let that limit your mind into thinking it can't be done.

Especially don't let that limit your mind into falling under the illusion that all of these endless circles and design philosophies that go in and out of vogue throughout the industry are actually doing something to improve software as a whole. If you ever wanted a good example of history repeating itself, software design is the perfect one.

In simple terms: the discipline of "architecture" is not well understood because it has NOT been formalized. At this point it's borderline fraudulent to even call it a discipline. Call it a career title, sure, but don't try to think of this stuff as anything on the same level as the sciences.


You keep talking about “optimal,” but algorithms and data are only good for optimizing, not for defining what you mean by “optimal” in the first place. Logic and engineering can achieve goals, but they can’t help you choose what goals to achieve.

The real world is big and incomprehensible, and we all have to choose optimization goals based on what limited understanding we have of the big picture - also known as the “system.” There is no math formula you’ll ever be able to find that can do that for you.

Optimizing a known goal, that’s the straightforward kind of engineering, the kind I said in the article maxes out at Senior Engineer. Figuring out what goal to optimize for is messy and uncomfortable, but it’s the only way to eventually solve the bigger problems of the world.

When I was in grade school, math was my favourite subject, because the answers were always right or wrong and you never had to worry if the teacher would disagree with you. It was comforting. It still is. But to grow up and have a bigger impact, we have to move on and learn about uncomfortable things too.

I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.


> I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.

Thanks for writing this. I too can relate, because these things used to bother me too. I like your anecdote about math being your favourite subject because it's one I can relate to, and coaching colleagues and reports past the fear of looking wrong, or even of admitting that they're wrong, has been a much more significant part of my career than I probably ever would have expected. The funny thing is that if you had told me that at the point in my career where it bothered me, I would not have listened. I had to stumble through it to realize everyone (including people like you) who warned me about this was right. I almost wonder if it's a rite of passage for engineers.


Do you still have trouble admitting you're wrong? Your last post to me was a complete troll and the moderator killed it because of how vile and rude it was. Do you resort to insults when you can't admit you're wrong? You certainly did to me.

> I almost wonder if it's a rite of passage for engineers.

It's flawed to think of the world as a reflection of yourself. It's definitely not a rite of passage for engineers. This is more of a rite of passage for you and your fear of being wrong.

Math to me was the same as any other subject. I never had an issue with being wrong or right. If anything, English was my best subject. My math skills developed later, in university. I have a greater talent for formal math than I do for the applied math they teach in high school. But this has nothing to do with any fear of being wrong. I like formal math because I find the philosophical implications of the subject interesting. I derive no other comfort from it and definitely can't relate to your or the parent poster's ability to derive comfort from the exactness of the answers. It's just puzzles and solutions; I can't derive "comfort" from that any more than I can derive comfort from a rock.

I think both you and the author of the original article are making misplaced judgements on character. Don't assume that others think like you, have the same qualities as you, or have the same weaknesses as you. Every time you make this assumption you're actually revealing more about yourself to others.

Your last post to me, the one that got killed by the moderator, was strange. I was sort of curious where all that made-up stuff came from. Then I realized none of it was made up. It's just a reflection of your own horrible life. For that I'm truly sorry and I hope you can find a way out of your miserable circumstances.


>not for defining what you mean by “optimal”

This is my point. The path to formalization is finding the exact formal definition of the notion we intuitively understand as "good software design patterns." For algorithm speed we use Big O; for other aspects of design, I'm saying that rather than create more software design metaphors, a more productive use of our time is to formalize and converge on an optimum.

>The real world is big and incomprehensible, and we all have to choose optimization goals based on what limited understanding we have of the big picture - also known as the “system.” There is no math formula you’ll ever be able to find that can do that for you.

Yes, except the computer system is an idealization of the real world. It places the real world into a logical world of formalisms where we can eschew science and use pure logic to draw conclusions.

>Figuring out what goal to optimize for is messy and uncomfortable

Sure. But clearly that task is separate from design patterns. 1. The business "designs" an objective and a goal. 2. The software engineer meets that goal. I am talking about 2. not 1.

>When I was in grade school, math was my favourite subject, because the answers were always right or wrong and you never had to worry if the teacher would disagree with you. It was comforting. It still is. But to grow up and have a bigger impact, we have to move on and learn about uncomfortable things too.

I fail to see how discomfort or growing up have anything to do with the topic at hand. The goal is to converge on the most correct and definitive answer possible.

>I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.

It doesn't bother me at all. It sounds like some sort of anxiety disorder if the fuzziness of certain answers bothered you. Human brains are neural nets more similar to the probabilistic (aka fuzzy) outputs produced by our machine learning models, hence most people should be more comfortable with fuzzy answers rather than pure logic. Either way, I am simply arguing a point. Your empathy is appreciated but unaccepted, this is not the goal. Again, the goal is to debate a point and find a correct answer.


> Your empathy is appreciated but unaccepted, this is not the goal. Again, the goal is to debate a point and find a correct answer.

Ironically but very relevantly to this conversation, it seems we disagree on the goal. :)


Well I'm not here to talk about my character. I'm only interested in my point. If you want to talk about me, well I can only tell you I'm not interested and neither is anyone else reading this thread. It's sort of against the spirit here on HN.


> The "problem domain" defined within a computer is NO different.

yes, you are correct, a specified problem domain within an implementation can be formalized, proved correct etc.

But that literally has nothing to do with the actual problem, which as Twisol said is about mapping the problem domain and the software domain so that both are supported.

Both hardware and software have moved in well known cycles. Processing has moved from CPUs to external processors (IO, then GPUs etc) and back again as hardware capabilities change. When comms are slow, its better to have two smart ends so that you minimize the information that needs to flow. When comms are fast, you can make one end "dumber".

Your entire premise is at a different level of abstraction than "software architecture" or "systems design". You haven't proposed a mechanism by which messy human problems can be formalized.

Software architecture and design is about dealing with non-deterministic, non-linear, fractal environments and the computer and systems science hasn't provided the theoretical frameworks that would allow the environments to be formalized so that they can be "engineered".

It's definitely not "science", neither is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.


>It's definitely not "science", neither is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.

It's a discipline the same way modern art is a discipline.

>But that literally has nothing to do with the actual problem, which as Twisol said is about mapping the problem domain and the software domain so that both are supported.

I explained to Twisol that I am talking about the "problem domain." Both domains actually go hand in hand.

Think about it this way. Number theory is a formalization of numbers. A math problem lives in the problem domain and utilizes the formalization of numbers as a method for solving it.

>Both hardware and software have moved in well known cycles. Processing has moved from CPUs to external processors (IO, then GPUs etc) and back again as hardware capabilities change. When comms are slow, its better to have two smart ends so that you minimize the information that needs to flow. When comms are fast, you can make one end "dumber".

So what's your point? I'm not getting what you mean by slow comms, smart ends, and dumb ends.

>Your entire premise is at a different level of abstraction than "software architecture" or "systems design". You haven't proposed a mechanism by which messy human problems can be formalized.

No it's not. It encompasses all levels. Have you ever used formal math to solve a "math problem", say accounting? Balancing the books of your finances is a "messy human problem" that uses a formalization of something else for its solution.

>Software architecture and design is about dealing with non-deterministic, non-linear, fractal environments and the computer and systems science hasn't provided the theoretical frameworks that would allow the environments to be formalized so that they can be "engineered".

Wrong. Software is mostly deterministic. However, even a non-deterministic model is amenable to theory. Ever heard of probability? Additionally, non-linear systems are just not as easily analyzable by certain methods such as calculus. You can still form a proof around a set of non-linear assembly instructions.

>allow the environments to be formalized so that they can be "engineered".

And that's my entire point. There's no point in iterating over the same endless design cycles and repeating the same mistakes every couple of years with a new framework that doesn't necessarily make anything better. Better to develop the formalism around "design" and optimize it. Another article like the one the parent poster posted is pointless and doesn't move the needle forward at all.

>It's definitely not "science", neither is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.

No, I'm looking at the field and seeing that we're going nowhere with endless conjectures and analogies about design. I'm seeing that the patterns of today aren't that much different from the patterns of the past. I'm also seeing software development being called "engineering" while eschewing much of the formalism that is part of engineering.

Due to all this I'm saying, that the field is going nowhere and is pretty much a big sham. I'm tired of the pointless flat circle of repeating history. "Systems Design" from the software perspective is therefore ripe for formalization and theory.


I think you're dismissing a critical consideration:

There are systems that are impractical to formalize, either because they are too complex, too poorly-understood, too dynamic, too ephemeral, or too expensive to do so.

Your complaints about the words "architecture" and "design" sound like a cached rant. I agree that there's sometimes an uncomfortable amount of squishiness in the way we build technology-systems, but I don't think that's always a bad thing! By and large, we are not specialists in our sciences, we are generalists working in the emergent systems created by the interactions of formal components. For many, this is the exciting part.

We borrow heavily (terminology and patterns) from building, manufacturing, various artistic disciplines, and from complex systems theory. This is inevitable of course -- but when I think of "systems design", I think more of Donella Meadows (Thinking in Systems) than Morris Mano (Computer Systems Architecture, Digital Design) ... though both are brilliant and responsible for hundreds of hours of my university career! (And the latter is a good example of why your objections to terminology are dead ends.)

Our industry builds squishy complex systems (products, applications, interfaces, software, firmware, some hardware) on top of rigid formalized primitives (algorithms, other hardware, physics). The area we inhabit is fertile ground for iteration, agility, and yes, charlatanism. I can understand why that might be bothersome, but I think it goes back to the old vitality-volatility relationship -- can't have one without the other. You may prefer a different part of the curve of course.

I have friends who graduated with Civil Engineering degrees, and work in their industry. Their projects are extremely formal and take decades to realize. This is appropriate!

I have other friends with Architecture degrees (the NAAB, Strength of Materials kind of architecture), who work in their industry. Their projects are a mix of formal and informal processes, and they take years to realize. This is also great.

Now obviously there's a lot of self-selection involved in these groups, but even still everyone has their set of frustrations within their industry. We in technology can iterate in an hour, or a few days for a new prototype PCB rev. This gives the industry amazing abilities, and I would never trade!


Except I never dismissed this point. The problem with your post is that you assume I categorize design as something useless. I have not. Obviously there are tons and tons of things within the universe where the only possible solution is design. I am not arguing against this.

I am talking about a very specific aspect of the usage of "design" within software. I am specifically complaining about the endless iterations of exposés on software design patterns and architecture. The trends where history continuously repeats itself: FP becoming popular, then OOP becoming more popular than FP, then FP becoming popular again. What about the whole microservices/monoliths argument, where monoliths started out dominant, then microservices became popular, and now monoliths are coming back into vogue again? Endless loops where nobody knows what is the optimal solution for specific contexts.

These are all methods of software organization AND my point is that THIS specific aspect of design is ripe for formalization, especially with the endless deluge of metaphor-drenched, pointless exposés on "design" inundating the HN front page feed. It's obviously an endless circle of history repeating itself. I am proposing a way to break the loop for a specific aspect of software by pointing out the distinction between "design" and "formal theory." We all know the loop exists because people confuse the two concepts and fail to even know what to do to optimize something.

System architects are artisans not scientists and they will as a result suffer from the exact same pointless shifts in artistic styles/trends decade after decade and year after year as their artisan peers do. Totally ok for styles to shift, but our goal in software is to converge at an optimum as well and that's not currently happening in terms of software patterns and design architecture.

The path out of this limbo is to definitively identify the method for optimization formally, not add to the teeming millions of articles talking about software design metaphors.


Any idea on how to formalize this area? Is anyone even trying to do that?


Personally, I'm nursing a thesis that the study of concurrency is fertile ground for a formalization of modular design. Where parallelism is the optimization of a software system by running parts of it simultaneously, concurrency has much more to do with the assumptions held by individual parts of the program, and how knowledge is communicated between them. Parallelism requires understanding these facets insofar as the assumptions need to be protected from foreign action -- or insofar as we try to reduce the need for those assumptions in the first place -- but I expect that concurrency goes much further.

Concurrent constraint programming is a nifty approach in this vein -- it builds on a logic programming foundation where knowledge only increases monotonically, and replaces get/set on registers with ask/tell on lattice-valued cells. LVars is a related (but much more recent) approach.
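
To make that concrete, here's a toy sketch of what I mean (my own made-up names, built on GHC's STM rather than the actual LVars library): a lattice-valued cell where writes only ever add information, and reads are threshold reads that block until some fact is present.

    import Control.Concurrent.STM
    import qualified Data.Set as Set

    -- A grow-only cell: its contents form a lattice (sets under union),
    -- and every write moves the contents monotonically "up" the lattice.
    type MonoCell a = TVar (Set.Set a)

    newMonoCell :: IO (MonoCell a)
    newMonoCell = newTVarIO Set.empty

    -- "tell": contribute a fact; nothing a reader already knows is ever invalidated.
    tell :: Ord a => MonoCell a -> a -> IO ()
    tell cell x = atomically (modifyTVar' cell (Set.insert x))

    -- "ask": a threshold read that blocks (via STM's retry) until the fact is present.
    ask :: Ord a => MonoCell a -> a -> IO ()
    ask cell x = atomically $ do
      facts <- readTVar cell
      check (Set.member x facts)

Because readers can only wait on thresholds, the order in which tells arrive doesn't change what any reader eventually observes -- which is exactly the kind of determinism I'm after.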

A different approach, "session types", works at the type system level. Both ends of a half-duplex (i.e. turn-taking) channel have compatible (dual) signatures, such that one side may send when the other side may receive. Not everything can be modeled with half-duplex communications, but the ideas are pretty useful to keep in mind.

I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.

(Note, of course, that local variables are local in the truest sense: other programmatic agents cannot make assumptions about them or change them. Short-lived, local state is as good as functional non-state in most cases.)


> I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.

Agreed. You mention LVars so I'm curious what you think about MVars and STM in general. I've always been fond of STM because relational databases and their transactions are a familiar and well understood concept historically used by the industry to keep state sane and maintain data integrity. SQLite is great, but having something that's even closer to the core language or standard library is even better.
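
(To make "closer to the core language" concrete: in GHC, the canonical STM example is something like the toy sketch below -- the whole transfer is one atomic block, and `check` makes the transaction retry until its precondition holds. Nothing production-grade, just the flavor.)

    import Control.Concurrent.STM

    -- Move funds between two accounts in a single transaction.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)          -- retry until enough funds are available
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)       -- commits (or retries) as a single unit
      readTVarIO a >>= print             -- 60
      readTVarIO b >>= print             -- 40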

It's part of why I like using SQL to do the heavy lifting when possible. I like that SQL is a purely functional language that naturally structures state mutations as transactions through the write-ahead log protocol. My flavor of choice (Postgres) makes different levels of efficient read and write available through isolation levels that can give me up to ACID consistency without having to reinvent the wheel with my read and write semantics. If I structure my data model keys, relations and constraints properly, I get a production-strength implementation with a lot of the nice properties you talk about, regardless of which service-layer language I choose to stand up on top of it.

There's one exception in particular that I've seen begin to gain steam in the industry which I think is interesting, and that's Elixir. Because Elixir wraps around Erlang's venerable OTP (and distributed database mnesia), users can build on the top of something that's already solved a lot of the hard distributed systems problems in the wild in a very challenging use case (telecom switches). Of course, mnesia has its own issues so most of the folks I know using Elixir are using it with Phoenix + SQL. They seem to like it, but I worry about ecosystem collapse risk with any transpiled language -- no one wants to see another CoffeeScript.


I'm not especially familiar with either MVars or STM, so you'll have to make do with my first impressions...

MVars seem most useful for a token-passing / half-duplex form of communication between modules. I've implemented something very similar, in Java, when using threads for coroutines. (Alas, Project Loom has not landed yet.) They don't seem to add a whole lot over a mutable cell paired with a binary semaphore. Probably the most valuable aspect is that you're forced to think about how you want your modules to coordinate, rather than starting with uncontrolled state and adding concurrency control after the fact.
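
For reference, a minimal sketch of that token-passing style with Haskell's MVar (just a toy): each putMVar blocks until the box is empty and each takeMVar blocks until it's full, so the two threads alternate.

    import Control.Concurrent
    import Control.Monad (replicateM)

    main :: IO ()
    main = do
      box  <- newEmptyMVar                -- holds at most one value; starts empty
      done <- newEmptyMVar
      _ <- forkIO $ do                    -- consumer: each takeMVar blocks until a value arrives
             xs <- replicateM 3 (takeMVar box)
             print (xs :: [Int])
             putMVar done ()
      mapM_ (putMVar box) [10, 20, 30]    -- producer: each putMVar blocks until the box is empty
      takeMVar done                       -- wait for the consumer before exiting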

STM seems very ambitious, but I struggle to imagine how to build systems using STM as a primary tool. Despite its advantages, it still feels like a low-level primitive. Once I leave a transaction, if I read from the database, there's no guarantee that what I knew before is true anymore. I still have to think about what the scope of a transaction ought to be.

Moreover, I get the impression that STM transactions are meant to be linearizable [1], which is a very strong consistency requirement. In particular, there are questions about determinism: if I have two simultaneous transactions, one of them must commit "first", before the other, and that choice is not only arbitrary, the program can evolve totally differently depending on that choice.

There are some situations where this "competitive concurrency" is desirable, but I think most of the time, we want concurrency for the sake of modularity and efficiency, not as a source of nondeterminism. When using any concurrency primitive that allows nondeterminism, if you don't want that behavior, you have to very carefully avoid it. As such, I'm most (and mostly) interested in models of concurrency that guarantee deterministic behavior.

Both LVars and logic programming are founded on monotonic updates to a database. Monotonicity guarantees that if you "knew" something before, you "know" it forever -- there's nothing that can be done to invalidate knowledge you've obtained. This aspect isn't present in most other approaches to concurrency, be it STM or locks.

The CALM theorem [2] is a beautiful, relatively recent result identifying consistency of distributed systems with logical monotonicity, and I think the most significant fruits of CALM are yet to come. Here's hoping for a resurgence in logic programming research!

> There's one exception in particular that I've seen begin to gain steam in the industry which I think is interesting, and that's Elixir.

I've not used Elixir, but I very badly want to. It (and Erlang) has a very pleasant "functional core, imperative shell" flavor to it, and its "imperative shell" is like none other I've seen before.

[1] https://jepsen.io/consistency/models/linearizable

[2] https://rise.cs.berkeley.edu/blog/an-overview-of-the-calm-th...


There's many topics in this area. Ones that are well known in industry are algorithmic complexity theory and type theory. Ones that are less well known include the two resources below.

http://www4.di.uminho.pt/~jno/ps/pdbc.pdf

https://softwarefoundations.cis.upenn.edu

I suggest you get used to ML-style languages before diving into those two resources (Haskell is a good choice), as it's not easy to learn this stuff and I think that's also part of the reason why it hasn't been so popular in industry.

The first resource builds towards a Prolog-like programming style where you feed the computer a specification and the computer produces a program that fits the specification.

The second resource involves utilizing a language with a type checker so powerful that the compiler can fully prove your program correct outside of just types.

Both are far away from the ideal that the industry is searching for but in terms of optimizing and formalizing design these two resources are examples of the right approach to improving software design.
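
(For a taste of the second resource's idea without going all the way to Coq: even plain Haskell with a couple of extensions lets you push some correctness into the types. A toy sketch of my own, not taken from either resource -- a length-indexed vector whose head function simply cannot be applied to an empty vector.)

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Natural numbers promoted to the type level.
    data Nat = Z | S Nat

    -- A vector that carries its length in its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Total by construction: the type rules out the empty case,
    -- so no runtime check (and no crash) is possible.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x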

I haven't found anything specifically on the organization of software modules so as far as I know none exists. But given the wide scope of software research I'm sure at least one paper has talked about this concept.


> I haven't found anything specifically on the organization of software modules so as far as I know none exists.

Parnas (1971) is seminal on this topic. https://apps.dtic.mil/sti/pdfs/AD0773837.pdf


Sure this paper introduces the concept of modules in software back when modules were non-existent. As far as I know, there's nothing on the optimal way to organize (keyword) modules.


> Sure this paper introduces the concept of modules in software back when modules were non-existent.

No... that's not correct. The very first non-clerical page quotes from a textbook discussing modular systems, and Parnas himself notes a distinct lack of material on how to actually organize and break down the system into modules.

The paper is literally called "On the criteria to be used in decomposing systems into modules."

It is prudent to at least read the preexisting material if you are going to dismiss it.


My mistake, I skimmed it and saw assembly language and I assumed it was that really early seminal paper that introduced modules to the programming world. Obviously my guess was wrong.


> Your complaints about the words "architecture" and "design" sound like a cached rant.

To be completely fair, my notes were similarly cached, and it was a little poor of me not to respond more directly to the substance of the original post.


Mine wasn't. He's just making that up.


> I disagree with you and you've made a mistake. You're the one under the illusion. I'll explain.

This is a very effective way to make the other party in a debate not wish to continue. I respect that you have strong opinions, but I'm not sure it's worth being right in a population of one.

In all sincerity, I'll leave you with wishes for a restful end of year.


To be fair, you said that I was under an illusion first. That definitely pissed me off a bit. I just subtly returned the favor and forgot about your indiscretion, as it's not a big deal to me. It obviously is to you.

You're free to leave and continue at your own discretion.


> but don't try to think of this stuff as anything on the same level as the sciences.

Depends on what kind of sciences we're talking about. For an enormous span of the sciences, serious reproduction and funding crises have emerged that place whole fields on the verge of, if not in the middle of, conceptual and practical collapse (hence the continuing brain-drain from science into tech).

At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that? Your conjecture is that architecture is not well understood because it is not formalized. What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far deeper. And I think (though only you can say) that you might be afraid of that.

When we learn how to use powerful tools, we want to use them for everything. That includes philosophical ones. What if you're taking it for granted that architecture will /ever/ reach the same level of formalization as the hard sciences? After all, there is something a bit hubristic about that. Wouldn't it be ironic to have such a deep faith in formalization so as to ignore (and even invert) the fundamentally unidirectional flow from phenomena to formal abstraction? But that seems to be what you're doing. After all, you have no proof that formalism, as a tool in and of itself, lends any utility to making sense from and building structures for the world.

It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose. And frankly, software as a field is very young by human standards. For it to not have the level of formal abstractions far older human pursuits have is a function of its youth. I think that to love formal abstraction for its own sake is an obscurantism of its own, and this kind of thing has a name: it's called scientism.

Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality. What will you do if the same holds true for you here? Even he was so taken and seduced with the elegant possibility of axiomatizing mathematics that he was unable to see the logistical (and finally logically provable) impossibility of doing so. And finally even he had to ultimately give up his endeavor as directly fruitless (although the silver lining was that his work did pave the way for a lot of crucial discoveries). Consider whether you're doing the same thing.


Aside from the context of this thread, you've really eloquently put into words something I've started to believe over the last few years. Please never delete this comment ;) I'm sure to bookmark it.

> It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing.

yesssss.


>At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that?

What the hell is whiggish historiography? You want me to come up with some textbook historical account of how software design has moved in circles? I assume it's obvious, and it's just generally hard to write about a trend that defies exact categorization. It probably can be done, I just can't spare the effort.

As for your other stuff, I respectfully request that you keep this argument formal rather than comment on my personal character. Nobody appreciates another person speaking about their personal character in a negative and demeaning way. You are doing exactly this, and there's really no point or need unless your goal is to set me off personally and have this argument degrade into something where we both talk about each other personally.

>What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far more deeper. And I think (though only you can say) that you might be afraid of that.

If it's a paradox then your reasoning could have existing evidence. Are there any attempts at formalizing software organization in academia? Have those attempts been ignored or have they actually failed?

Either way, in the industry it's pretty clear to me that a lot of energy is spent discussing, debating and coming up with design analogies year after year. In my opinion this is actually a misguided attempt at optimization. People think the latest software design analogy or architecture pattern is going to save the world, but it's always just a step to the side rather than forward, and that's because most people in the industry don't even know what it means to truly "step forward."

> What if you're taking it for granted that architecture will /ever/ reach the level of formalization as the hard sciences?

Well I'm obviously betting that it can. Either way, there's no denying that given a formal definition of a problem, a single best optimum solution (or several) does exist. In other words, there must exist a configuration of assembly instructions that fits a general definition of the best solution to a problem. Whether a formal method other than brute force can help us find or identify this solution remains to be seen, but the existence of a best solution is enough for me to "predict" that a formal method for finding it exists.

>It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose.

You're conflating science and logic. Science is about observing reality and coming to conclusions based off of observations and the assumption that logic and probability are real. It involves a lot of statistics.

Logic is just a game with rules. We create some axioms and some formal rules and play a game where we come up with theorems. The game exists apart from reality. We build computers on the assumption that logic applies to reality. But more importantly we use computers on this assumption as well.

Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.

That's why formality applies to software and we can disregard science for a good portion of computing.

Scientism has nothing to do with this. You are at an extreme point of misunderstanding here.

>Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality.

No, he failed because he assumed that logical systems must be both consistent and complete. Others later showed that logical systems can exist within our logical games so long as they do not hold those two properties, consistency and completeness, at the same time.

Also the logical premise you are speaking of was not vindicated by reality (aka scientific observations). It was vindicated by additional formal logical analysis by another logician. Reality and logic are two separate things. The main connection in those two areas is that science assumes logic and probability are part of reality. Outside of that assumption the two have zero relation.


> Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.

It's somewhat poetic, beautiful and accidentally funny that your argument frays at the same blind spots that your tone does, which is at the gap between theoretical and on-the-ground meaning. It is true in a theoretical sense that you could describe a computer as a simulator which can be modeled with formal rules. But it tells me nothing about why the computer as a mass-market electronic device which runs mass-market software occupies the societal role it does, nor where it will go, nor why it was an innovation which transformationally altered society in the same way as the flame, the wheel, the chariot, the bow, the written word, the loom, the printing press.

Now, if you're not going to try and engage with any of that because "It probably can be done, I just can't spare the effort" what do you think is more likely:

1) your complaint most loudly voices new and fresh insight to the industry which it had previously missed

2) your complaint most loudly voices your own gaps in curiosity and understanding

If it's 2, then I just think that's sad. I see a lot of engineers (frequently early-stage in their careers and suffering from a bit of imposter syndrome) get stuck in a rut where they dogmatically devour formalism as a vehicle to elevate the level and quality of their engineering to the level of rigor seen in the sciences. But the appropriation of surface-level aesthetics masks a methodological inability to do the dirty work to get the actual job done via "less sexy" paths (as reality often requires), to differentiate and harden critical sections when necessary. It's sad because it generally only speaks to their own lack of exposure to just how much power and impact software can be built to have, and it takes the momentary weakness of inexperience and calcifies it into eternal inexperience and terminal juniority owing to an unrealistic, radioactive hubris far out of proportion with actual capability.

That could be you. It doesn't have to be.


[flagged]


> So? Then tell me how my gap in understanding is reflected in my POINT rather then in my character.

Not only have I already done so, so too have a bunch of other accounts. Why don't you re-read what they've all said and think about it some more? I'm sure you'll be able to find something more useful to add if you do that.

> So what? Who cares if it's me? How does this matter to you and what does this have to do with the topic?

Because if you do that, you will never fully mature as an adult human being. You'll repel others away from you and then feel the pain of isolation and loneliness. You'll watch others live rich lives you wish you had. I think that is a very unfortunate path. I hope you will avoid that pain for yourself before it's too late and you can't redo all the time you will inevitably kick yourself for wasting. Life goes by very quickly and you don't get chances for do-overs.


This is obviously a troll. We're done here.


Thank you for putting in the good fight. It will take a high-profile catastrophe akin to the Challenger disaster for the industry to move on from the current state and formalize even basic things we've known for decades, such as category theory and functional programming. And unfortunately, the results will be heavy regulation. Until then, we will just have to watch history repeat through these types of discussions. [0]

[0] https://github.com/promises-aplus/promises-spec/issues/94


Please don't continue this thread. Whether you agree with me or not, it has devolved into personal and character insults by the other poster. You can continue the discussion with me somewhere else in the thread just not under yowlingcat. Blood has been spilled (figuratively) and tensions are extremely high.

Suffice to say, I'm not talking directly about category theory; that's one possibility of formalization (and it seems quite unlikely). Other parts that have successfully made it into programming (and have changed programming for the better) include complexity theory, type theory and the Rust borrow checker. Two of those theories automate correctness and one is helpful in analysis of execution speed. My proposal is about one for modular organization, and while category theory is a great candidate, it does have an incredibly high learning curve, and to me that is why I agree with you that it is an unlikely candidate. I mean, it does sort of exist already as a type checker in Haskell... I'm more saying that I don't expect people to actually try to learn CT.

Either way, I believe more formality can be achieved, and we don't have to force the entire industry to use monads and FP (FYI, I prefer the style myself). If a formal theory about modular organization ever comes into vogue, my guess is that it will exist more popularly as a "design pattern" in the industry rather than a programming framework or formal method. People can choose to use a pattern, or bend the rules... the main point and the crux of my entire effort is to say that we need design patterns that provably improve "design." Without proof, most of our efforts seem like steps to the side rather than forward.

If you have a reply, please place it directly under my parent post, not here.

edit: That github link is a picture-perfect mirror of what I'm talking about in the industry. Thumbs up for bringing it to my attention.


> The main connection in those two areas is that science assumes logic and probability are part of reality.

That's definitely not the case with respect to science, which is empirical and not rationalist. What you're describing in that sentence is Platonic realism, not science.


It definitely is the case.

Science involves statistics, statistics is built on top of the theory of probability, and probability is built on top of logic.

Therefore, for science to utilize statistics, it must assume probability and logic are true.

https://en.wikipedia.org/wiki/Empiricism

Examine the quotation in the first section: "Empiricism in the philosophy of science emphasises evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation. "

The key word here is "solely" meaning that empiricism involves the addition of observational evidence on top of other forms of analysis.

Additionally take this sentence: "Empiricism, often used by natural scientists, says that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification"

Note that falsification used here cannot exist without assuming logic is true.


> it must assume probability and logic are true.

True in what sense? I appreciate the rigor you're trying to bring to this discussion. Most of the points regarding logic you raise here and in your other replies on the thread have been comprehensively debunked in Kant's Critique of Pure Reason.

One omission is the nature of human senses and perceptions, the understanding of which eliminates any possibility whatsoever of conclusively distinguishing the statement "logic is true" from "logic appears true to us but its actual truth status is unknowable". This is just a toy explanation and the Critique is far more comprehensive and rigorous.

> empiricism involves the addition of observational evidence on top of other forms of analysis.

That may be the case in circumstances where it's convenient and other forms of analysis serve the function of useful tools or mental aids. However empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science anymore.


>"logic is true" from "logic appears true to us but it's actual truth status is unknowable".

That's just a paradox. No point in going too deep into the paradox as it's unresolvable. At best it can be "Assumed" that logic is true. Replace "assume" with "pretend"; it's the same thing. Science pretends logic is true. Whether it's actually true or not is the paradox.

>That may be the case in circumstances where it's convenient and other forms of analysis serve the function of useful tools or mental aids. However empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science anymore.

Sure but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion the "leading to a conclusion" is done through logic.

Another way to put it is, if logic wasn't real, no observation would make sense. If I observed a pig, then pigs exist by logic. If logic wasn't real then the observation doesn't imply pigs exist. By logical analysis, an observation is therefore useless without logic.

Suffice to say, at this level of analysis we can clearly conclude that science "pretends" logic and probability are true. Getting deeper into this dives into paradoxes, which are ultimately uninteresting dead ends to me because they're unresolvable.


> At best it can be "Assumed" that logic is true.

What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?

> Replace "assume" with "pretend" it's the same thing.

We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and our logical deduction of its spherical shape is "pretend"?

> Sure but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion the "leading to a conclusion" is done through logic.

> If I observed a pig, then pigs exist by logic. If logic wasn't real then the observation doesn't imply pigs exist.

Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.

> Getting deeper into this dives into paradoxes which are ultimately uninteresting dead ends to me because it's unresolvable.

I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.


>What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?

Why use science if it won't work without assuming the principles it's built on are true? We assume science is true, yet we established that logic cannot be shown to be true. Thus if logic cannot be shown to be true, then science cannot be shown to be true, so why do we use science?

The only other conclusion is we "assume" science is true and therefore "assume" logic is true even though we can't truly know if it's true.

>We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and our logical deduction of it's spherical shape is "pretend"?

I'm not a philosopher. I'm not into epistemology, as I'm not even entirely sure what it is. So if you dive into that world too deeply the argument is over, because I can't argue with something I don't know about. Either you explain your points in layman's terms or the argument can't proceed very far, because I won't be able to understand you.

I'm just saying that "pretending" is the same thing as "assuming." We don't actually know if something is true, but we still use science as if it's true. The contradiction is what allows us to use the word "pretend": we know that it cannot be known, yet we act as if it is known. Hence "pretend."

>Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.

Well declaring a statement nonsensical doesn't mean anything to me without you explaining the reasoning behind your declaration. Citing a book won't really do anything for me because I haven't read the book. We're at a dead end here. Obviously I won't read the book because it's too long to read right now and obviously you won't explain the book for the same reason, so for this point the argument is over... we reached an impasse and can only agree to disagree unless you decide to explain the book to me.

>I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.

I'm interested up to a point. If the point is a paradox I'm not interested in exploring the paradox. If that's the direction you're taking your argument then it's an impasse. Either way we're just debating nomenclature here.


> The industry is ripe for formality in this area as we've been going in circles for decades. We are dealing with computers here: virtual idealized worlds ripe for systematic formalism.

The "industry" is not about computers, it is about the use of computers to solve people's needs. You're confusing the engineering with the need.

Sure, people wrap up stupid or simple ideas in layers of abstraction, usually because they want to sell consulting.

But this article was more about looking beyond the boundaries of simple software development, needing to understand the context of where these systems operate.

Essentially, your argument is proving the early part of the article, where the author is comparing how some juniors are excellent "engineers" but can't move up to higher levels of abstraction, vs others that might not deal well with individual details, but have a better understanding of how the systems they develop need to interact with the rest of the world.


>The "industry" is not about computers, it is about the use of computers to solve people's needs. You're confusing the engineering with the need.

No I'm not. Every "need" can be formalized into a specification.

>Sure, people wrap up stupid or simple ideas in layers of abstraction, usually because they want to sell consulting.

So?

>But this article was more about looking beyond the boundaries of simple software development, needing to understand the context of where these systems operate.

And I'm saying this article is just listing examples and making analogies. It doesn't formally define what a system is and it doesn't talk about any ways to use this theory to optimize a system nor does it try to define what optimize is. It's just one out of a million articles that tries to talk about "design" and create an analogy or metaphor out of it while teaching the reader absolutely nothing new.

>Essentially, your argument is proving the early part of the article, where the author is comparing how some juniors are excellent "engineers" but can't move up to higher levels of abstraction, vs others that might not deal well with individual details, but have a better understanding of how the systems they develop need to interact with the rest of the world.

Yeah, I didn't care for his argument; he's free to make that analogy. Think what you want on how I "proved" it... The meaning of the metaphor itself is irrelevant to my topic; it's the fact that the metaphor exists and basically introduces nothing new to the concept of design that is part of my complaint.


I cannot agree more. There is way too much B.S. in the field of software and it's a joke that we get away with calling ourselves "software engineers" or "computer scientists" for how little analytical decision making or scientific method applies.


> monolith vs. microservices? Which is actually better? ... the resulting non-answer people come up with is: "Depends on your use-case."

But that's the truth of the matter. _Neither_ is "actually better" without qualifications.


Prove this fact formally for each qualification. Resolve the debate once and for all. You can't, which is my point.


> You can't,

I don't need to.


Even mathematicians are not all-knowing. There are lots of theorems that we suspect to be true that have never been proven to be true. In fact, Godel's incompleteness theorems state that there will always be statements that are true, but can never be proven to be true.

Fields should be as mathy as they can be, but no mathier. Otherwise it's just fake formalism, as you said yourself.

> Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

You are setting up a false dichotomy between knowing everything (which nobody does, certainly not scientists or mathematicians), and knowing nothing.

There are plenty of fields with "design" in the name that are quite rigorous: analog circuit design, digital signal processor design, microprocessor design, compiler design, etc.


>Godel's incompleteness theorems state that there will always be statements that are true, but can never be proven to be true.

Not exactly. It says that incompleteness always holds for any consistent (and sufficiently expressive) system of axioms. It does not have to hold for an inconsistent system of axioms.

>Fields should be as mathy as they can be, but no mathier. Otherwise it's just fake formalism, as you said yourself.

If I used the words fake formalism, I'm likely referring to the usage of big words and titles to falsely promote a sense of legitimacy. Like "System architect".

Fields should be as mathy as we can make them as developing a formal language around any concept aids in exact analysis.

>There are plenty of fields with "design" in the name that are quite rigorous: analog circuit design, digital signal processor design, microprocessor design, compiler design, etc.

Yeah, but all of these fields involve intuitive guesses. Given specific requirements, can you derive via calculation the optimal circuit design / digital signal processor design / microprocessor design / compiler design?

These constructs are "designs" rather than calculations and they, as a result, reflect a lack of knowledge on how to calculate the best possible "design."


> Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

There are branches of science whose theories are more often derived from data and observation than from axioms. Just because you may not be able to break something down into first principles, does not mean that the knowledge is useless.

Design, too, can be data driven. UX and UI design can be analyzed using A/B tests, or by observing patterns of user behaviour.

It might be true that much of systems design is based on anecdotal evidence and intuition, but I don't think that's enough of a reason to ignore the field of design entirely.

For example, the concept of abstraction in software design may be based primarily on the intuition that human beings are bad at holding too much complexity in their minds. But any software developer who has written more than one program will agree that abstraction is crucial to good design.


>There are branches of science whose theories are more often derived from data and observation than from axioms. Just because you may not be able to break something down into first principles, does not mean that the knowledge is useless.

Yes science is different from logic. Programming functions happens in a limited axiomatic world that simulates logic. This makes computer science a bright target for mathematical formalization. This is entirely different from science.

>It might be true that much of systems design is based on anecdotal evidence and intuition, but I don't think that's enough of a reason to ignore the field of design entirely.

I never said ignore the field. Often we have no choice. No one calculates the best work of art. Art is created by design.

>For example, the concept of abstraction in software design may be based primarily on the intuition that human beings are bad at holding too much complexity in their minds. But any software developer who has written more than one program will agree that abstraction is crucial to good design.

The concept of abstraction, good abstractions and bad abstractions can be separated from design and formalized into exact definitions. That is my argument.

The reasoning behind why a human would want to do that is irrelevant.


> This makes computer science a bright target for mathematical formalization. This is entirely different from science.

Yes, but until computer science is fully formalized we will still need design, and can benefit from science. If/when it is fully formalized and creating software becomes an automate-able optimization problem, we will no longer need system designers. Or software developers, for that matter.

> I never said ignore the field.

Not explicitly, perhaps, but it really does read like that is what you're implying. You said multiple times that anyone labelled as an architect or designer knows nothing and is peddling bullshit. If we can agree that design is amenable to scientific inquiry, then it would make sense that some designers do know things.

Re-reading your original post, I realize now that I chose the wrong quote to respond to. I do agree with you that anything labeled as "design" is necessarily constrained by a lack of knowledge. Any time there are multiple ways to solve the same problem and there is no a priori way to figure out which solution is the best, we are forced to design. My point is that this describes software development, which as noted above has not been fully formalized. Writing software is design, and therefore needs designers.

> Often we have no choice. No one calculates the best work of art. Art is created by design.

Are you implying that when it comes to software, we do have a choice?

> The concept of abstraction, good abstractions and bad abstractions can be separated from design and formalized into exact definitions. That is my argument.

This is where you lose me, I'm not sure I understand what you mean here. Abstraction is a design principle, and developers argue constantly about whether a given abstraction is good or necessary. The motivation behind abstraction as a principle hinges on how you define "too complex", and that sounds very subjective to me—the opposite of formal.


>This is where you lose me, I'm not sure I understand what you mean here. Abstraction is a design principle, and developers argue constantly about whether a given abstraction is good or necessary. The motivation behind abstraction as a principle hinges on how you define "too complex", and that sounds very subjective to me—the opposite of formal.

It's subjective because it still exists in the realm of design. Once we formalize these notions the definitions become more clear. The key is formalization of fuzzy words. The definition of complexity is subjective, yet there is a shared definition that we all agree on; otherwise we wouldn't be able to communicate. The key is to pinpoint the exact shared metric that causes us to consider one piece of code more complex than another piece of code. Not an easy task. Formalization is very much a deep dive into our internal and linguistic psychology.

Take for example "luck." The concept of luck was formalized into a whole mathematical field called probability. Again not an easy task but doable for even fuzzy concepts like luck.

>Not explicitly, perhaps, but it really does read like that is what you're implying. You said multiple times that anyone labelled as an architect or designer knows nothing and is peddling bullshit. If we can agree that design is amenable to scientific inquiry, then it would make sense that some designers do know things.

Maybe a better way to put it is like this: many design principles are bullshit simply because we don't know which of two opposing design principles is better or worse. There are a lot of rules of thumb that happen to work, but there's a lot of stuff that's pure conjecture and unproven, and even stuff that doesn't actually work. For example, OOP was previously the de facto way of programming; now it's highly questioned as a methodology. It brings all the "experts" who promoted it as the one true way into question.

Additionally, if you meet someone with the title "Architect", a better title for them is "Technical Manager", because that's what they actually are. The title "Architect" implies that they have specialized formal knowledge, when in fact they are usually just managers with more experience. Really, that's the only difference: any typical engineer, all other things being equal, has pretty much the same informal knowledge that an architect has; after all, it's all informal anyway.

>Are you implying that when it comes to software, we do have a choice?

I'm saying what you already know. We do have a choice to move software in the direction of formalized methods for the things we label as "design"; no such choice exists for art.


> Programming functions happens in a limited axiomatic world that simulates logic.

You sure you're not mixing up "programming functions" with "powerpoint presentations" here?


Dude, what's up with that comment? Are you mocking me?

No. I'm talking about how a computer is basically a logic simulator. You don't need to use empirical methods to prove things in a logic simulator, you just use logic.
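A trivial sketch of what I mean (my own toy example): you can check a propositional claim like De Morgan's law over every possible assignment, and because the state space is finite and fully enumerable, the check is a proof rather than an experiment.

    from itertools import product

    # De Morgan's law: not (p and q) is equivalent to (not p) or (not q).
    # Enumerating all 4 assignments decides the claim outright; no sampling, no error bars.
    assert all(
        (not (p and q)) == ((not p) or (not q))
        for p, q in product([True, False], repeat=2)
    )
    print("holds for all 4 assignments")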


You're free to ignore Conway's law[1] and the like at your own risk.

I would argue that if it were straightforward to apply such rigor to software or organizations, you'd see something closer to hardware, where formal verification plays a much larger role (to be fair, there's also the cost of hardware/ASIC respins at play here). As a byproduct, I think you see much less autonomy, or even diversity of job roles, in that space because a good portion of the problems are considered "solved".

This is not a knock against hardware (it's hard!) but more that the design space of large-scale software projects involves a diverse set of teams and people, so you see similar patterns arise.

[1] https://en.wikipedia.org/wiki/Conway%27s_law


You are making the assumption that the actual science of Computer Science has anything to do with "Software Engineering". The formal theories of CS, e.g. proofs of correctness, have very little to do with day-to-day programming. The only example I know of true software "engineering" was the NASA Space Shuttle software team.

But they had the benefit of a fixed platform and total control of the software environment. Attempts to implement similar levels of "engineering" (e.g. CMMI) always fail unless they have similar constraints.

Software development is not "engineering" in the classic sense, where there is a set of formalized approaches, acceptance criteria, etc. for specific areas of implementation.

People wrap things up in "process" to try to present a picture of "engineering" but it's a Potemkin village.

On the other hand, software "architects" are doing what building architects do. They apply heuristic approaches, hopefully using patterns that have been shown to be successful. The reason architectural patterns appealed to programmers is that both fields are not engineering.

The trouble is that many software architects are focussed on the low level engineering. Architects that draw "C4" diagrams that go lower than maybe 2 levels aren't being architects.

Design doesn't travel in circles, it learns from experience. It understands that there needs to be constraints, otherwise the problem to be solved is unbounded. It understands that the system or product has to interact with its users and that the interaction needs to be as seamless as possible, so affordances are vital.

Don't judge a professional field ("design") by empirical science standards.

Design is not engineering.


>The trouble is that many software architects are focussed on the low level engineering. Architects that draw "C4" diagrams that go lower than maybe 2 levels aren't being architects.

I would argue software architects aren't doing any engineering. They're doing something similar to art: drawing and painting. The main problem is that big words like "architect" deceptively paint a picture as if they're doing something other than art.

>Design doesn't travel in circles, it learns from experience. It understands that there needs to be constraints, otherwise the problem to be solved is unbounded. It understands that the system or product has to interact with its users and that the interaction needs to be as seamless as possible, so affordances are vital.

Much of it does travel in circles, because what happens is that people encounter two differing designs but can't really know which one is better. Additionally, it doesn't necessarily improve, because nobody can truly define improvement.

Take the monoliths vs. microservices argument. It's cycled away from monoliths and back. Take FP and OOP, it's also cycled from FP to OOP and back to FP again. Or what about the newest framework flavor of the week in javascript that tries to improve on all the faults of the front end but ends up being a step to the side? These circles are everywhere.

>Don't judge a professional field ("design") by empirical science standards.

I judge it by whether it's improved or not. It has not. Technology shifts but it doesn't shift in the direction of improvement. Many shifts are just horizontal genetic drift.

>Design is not engineering.

"Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[1] The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering."

keyword: "design"

source : https://en.wikipedia.org/wiki/Engineering


I understand the core of your rant is "the lack of formalization prevents us from truly knowing what an optimized design is, so we just guess." You would prefer that society work on developing better formal theories instead of ... what? Doing business, solving concrete problems, and building stuff?

I believe you can do a rigorous design/architecture in theory. In practice, we cannot handle the complexity and uncertainty. Usually at some point you have a human as part of your system to close some feedback loop and then you are discussing fuzzy topics like psychology.

You use programming languages as examples and claim that Rust is "closer to an optimum solution." Most of the other replies are about our inability to agree on what that optimum should be. Rust certainly loses to C++ in terms of ecosystem maturity, so it is further away from the optimum in that respect, and thus many people rationally decide not to use it.

I agree with you that design and architecture are a lot about intuition and gut feeling and should be more formal. However, we don't have time to wait for the formal theory. Food needs to be put on the table, so income needs to be generated, so business needs to be done, so decisions with incomplete information and no formal theory must be made by biased humans. Unfortunately, the complex and wicked problems are usually the more important ones. If you pick C++ instead of Rust, the organization will probably survive anyway. If you pick the wrong technology or person, it can kill a company.


>discussing fuzzy topics like psychology.

Psychology is fuzzy, but the study of it is actually quite formal in terms of empiricism. You may be thinking of psychotherapy? Not sure, and I'm also not familiar with psychotherapy, so take it with a grain of salt.

I'm not invalidating design and all things without rigor. I'm invalidating specific trends where we end up going in circles because of lack of rigor even though it doesn't need to be that way. Software design is one such area.

>I agree with you that design and architecture are a lot about intuition and gut feeling and should be more formal. However, we don't have time to wait for the formal theory. Food needs to be put on the table, so income needs to be generated, so business needs to be done, so decisions with incomplete information and no formal theory must be made by biased humans. Unfortunately, the complex and wicked problems are usually the more important ones. If you pick C++ instead of Rust, the organization will probably survive anyway. If you pick the wrong technology or person, it can kill a company.

Sure, agreed, I never said otherwise. I'm remarking more on the evolution of the industry. How much of this decade was an improvement over the previous decade? How much of it was a repetition of the same historical mistakes, made again and again? I am proposing that formalism should be used to break out of the loop. I am not proposing that you use formalism to do your job. At least not yet. Think of it as: where should the boss put the R&D funds? Formalism is it.


If you prefer formalism, the cybernetics literature should appeal. I prefer the Systems Dynamics approach, because it's formal enough to avoid woo, but informal enough to avoid avoidance.

But it's a mistake to draw the general conclusion, from this blogpost, that a program aiming to describe, catalogue and comprehend phenomena across many fields in common terms is a fool's errand. It's been an ongoing program of research for most of a century now.


Can you suggest a good introductory book for Systems Dynamics?

The last time the topic came up, I read the popular one by Donella Meadows but found it too shallow [0]. Now I'm considering "An Introduction to General Systems Thinking" because it was written by a computer scientist, Gerald Weinberg.

Also, is there any free tool to play around with modelling feedback loops? LOOPY [1] quickly becomes too simple.

[0] http://beza1e1.tuxen.de/thinking_in_systems.html

[1] https://ncase.me/loopy/


Ok, saw "Sterman's Business Dynamics" in the other comment. :)

Any good tool suggestions?


Sheetless looks promising: https://sheetless.io/

There are also the classic tools like Stella, but they're expensive.


I found https://insightmaker.com/ which looks quite capable and polished.


>But it's a mistake to draw the general conclusion, from this blogpost, that a program aiming to describe, catalogue and comprehend phenomena across many fields in common terms is a fool's errand. It's been an ongoing program of research for most of a century now.

It's not a fool's errand, but the way this article and many others approach the task is a fool's errand.

If he thinks the colloquial use of "systems design" applies to various fields outside of computing, then he should formally define what he means by a "system" and then prove his point. We don't need another blog post about some "design" metaphor.


I had a slightly different complaint from yours, which is that the author doesn't seem conversant with the kinda-sorta-formal fields of study that already exist.

Take for example the "chicken-egg" problem. Economists study this under "multi-sided markets" (which the author touches on, but late in the example), as part of the study of path dependency. Systems dynamics researchers have framed the economics work in their own terms of stocks and flows, but the essential structure is the same and can be reduced to equations. It's mostly calculus, sometimes there's some linear algebra.
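To make the stocks-and-flows framing concrete, here's a made-up two-stock toy (my own invented numbers, not anything out of Sterman) where content attracts browsers and browsers attract content, stepped forward with plain Euler integration:

    # Two reinforcing stocks: 'browsers' (users) and 'content' (sites).
    # Each stock's inflow depends on the size of the other; all constants are invented.
    browsers, content = 1.0, 1.0
    dt, a, b, cap = 0.1, 0.5, 0.3, 1000.0

    for _ in range(200):
        d_browsers = a * content * (1 - browsers / cap)  # adoption slows near saturation
        d_content = b * browsers                         # more users -> more content written
        browsers += d_browsers * dt
        content += d_content * dt

    print(round(browsers, 1), round(content, 1))

Run it and 'browsers' traces the familiar S-curve while 'content' keeps climbing; the point is only that the chicken-and-egg feedback is a pair of coupled differential equations, nothing mystical.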

The best book in my view is still Sterman's Business Dynamics, which is much broader in scope than the title suggests (it's a play on earlier book titles). There will hopefully be a second edition in the next year or two.

Edit: I should note however that when I see "system" I pattern match on what I know best. But there's a whole discipline of "System Engineering" which comes mostly from defence, aeronautical and astronautical fields. It has a high focus on formalisms to try to govern immensely complex technological efforts.


Formalism as you describe it is an abstraction that we invented to help us model stuff. It's not more true, or more accurate, or strictly better than any other abstraction.

The fact that design and systems and law and economics and philosophy and etc. etc. are ultimately squishy and informal is actually a reflection of the truth. Down there at the bottom of everything? It's humans. Subjective, emotional, informal humans. That's the baseline. Use formalism where it helps, absolutely, but don't trick yourself into believing it's a better way of being, of doing. It's not.


It is an abstraction, that much is true. But formal languages are built on top of logic. The formal abstractions force us to stay true to logic and that is why they are effective.

Hence the term "formal language" rather than just "language." All language is an abstraction over something; the term "formal" indicates a deep connection with logic. What this means is that a formal language is built from a set of axioms, so as long as the axioms hold true, everything derived in the language does as well.

The same cannot be said for other abstractions.

To put it plainly: formalism is abstraction, but not all abstractions are formal. Don't trick yourself into believing otherwise.
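A minimal sketch of that "holds as long as the axioms hold" property, in Lean (a trivial example of my own): the conclusion q is obtained purely by applying the system's rules to the hypotheses, so it is exactly as trustworthy as they are, no more and no less.

    -- From the hypotheses hp : p and hpq : p → q, derive q by modus ponens.
    -- Nothing empirical enters; the theorem stands or falls with its premises.
    theorem from_hypotheses (p q : Prop) (hp : p) (hpq : p → q) : q :=
      hpq hp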

>Down there at the bottom of everything? It's humans. Subjective, emotional, informal humans.

The baseline is not humans. We are one part of the universe, not the center. Logic is the baseline. Logic is the thing universal not just to all humans, but to all living beings and all things. This has been repeatedly observed to be true, and therefore one is wiser to hold logic as the universal truth than to hold an anthropocentric view of the universe.


> The baseline is not humans. We are one part of the universe, not the center. Logic is the baseline.

Logic is not the baseline of human constructs.


> Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

http://www.anft.net/f-14/f14-history-f14a.htm

13 matches for "design".


And for all 13 parts of that aircraft that were "designed", they cannot know whether that design was the best one possible. It's just a best guess, because they lack the ability to calculate the optimal configuration for that aspect of the aircraft, which is exactly why they turned to "design" instead. That is for sure a lack of knowledge, and it is exactly what I'm referring to.


This is good and represents the dilemma faced by engineers: should I make shit up or preserve the integrity we have? I agree with most of your argument, but I have to say that what you describe mostly applies to shitty designers. There are good designers too, who consistently do good work.


Apenwarr shares thoughts on system design, career paths, products, and innovation versus disruption.


The statement

"Which came first, HTML5 web browsers or HTML5 web content? Neither, of course. They evolved in loose synchronization, tracing from way before HTML itself, growing slowly and then quickly in popularity along the way."

Prompted me to abandon reading the rest of the article.

There were committees and centralized coordination efforts involved in HTML5, just like with the languages before it, both for the standard and for the participating vendors' own implementations. Development of the standard came first.

These sweeping generalizations and grand conclusions give the impression that the example was contrived to serve some grand statement.

It is the statement of the article that appears to be loose.


This seems like an unsympathetic reading. The point seems to be that standards usually aren't simply invented out of nothing; they take existing practice as input.

Sure, there was a standards process, but the people involved looked into what browsers did and how popular particular HTML constructs were in the millions of websites that already existed when the HTML5 standard was being written, and tried not to break them. The input to their design process came from many sources.

Before HTML5 formally existed, there were many web pages that more-or-less did what the standard said they should do, and browsers that more-or-less worked with those web pages.


Exactly. For the IETF, defining a draft standard (usually) requires two independent implementations first. Efforts to define standards before trying to implement them are usually disasters.


You abandoned the article because you disputed the characterization of HTML5 development in a throwaway example that had nothing to do with the actual topic of the article, just about exactly midway through it? Get out of here. You were already annoyed by the article's tendency toward the sweeping, made-to-purpose generalizations that you feel this represents.



