Akin's Laws of Spacecraft Design (umd.edu)
240 points by khet on March 23, 2016 | 70 comments



> 36. Any run-of-the-mill engineer can design something which is elegant. A good engineer designs systems to be efficient. A great engineer designs them to be effective.

I feel like you could permute "elegant", "efficient", and "effective" in any way here and still arrive at a maxim that sounds nice but doesn't really say anything.


I don't agree, but there is a level of interpretation at play here:

elegant: builds a system in a way that is technically satisfying

efficient: builds a system in a way that uses the least resources

effective: builds a system in a way that best solves the problem

I've built a lot of elegant systems that didn't solve the right problem but sure looked good.

I've also built systems out of leftover servers and laptops that were super inexpensive.

But I've learnt that there are times where spending money is the best way to build the system that the customer actually wants (and will pay for).


> "elegant: builds a system in a way that is technically satisfying efficient: builds a system in a way that uses the least resources effective: builds a system in way that solves the problem"

No, 'Effective' solves the problem, but not necessarily in the "best" way possible. In my opinion, the quote as stated is just wrong. Effective means "gets it done", but doesn't convey that it was done optimally.

It makes much more sense to me to say...

"Any run-of-the-mill engineer can design something which is effective. A good engineer designs systems to be efficient. A great engineer designs them to be elegant."


The way I read it, that's exactly the point.

It makes much more sense to you (and me) that way, but the point is that our common sense about what makes an engineer great is wrong.


How about "Any run-of-the-mill engineer can design something which is either elegant, efficient, or effective; a good engineer designs systems to be two out of the three; a great engineer designs them to be all three."


I think that for elegant solutions you often have to define the problem right. In my experience the problem is often not that clearly defined, so it's hard to build an elegant solution.


I'm pretty sure effective just means "solves the problem".


Efficiency is one of the basic tenets of Engineering. Almost every problem can be reduced to "How can this system be made more efficient?"

In real-world systems you often have competing constraints. In construction, if someone asked you to make a beam section more efficient you'd need to consider weight, cost, weldability, yield strength, etc., all competing against each other.

An effective engineer is someone who can see the whole picture of the system and where his design fits into it, and who can make the most appropriate efficiency gains in the areas that give the greatest benefit.

I've always considered a technically elegant solution to be something more novel - an approach that had never been considered before, or had been considered and dismissed as unworkable in the past.


They are ordered by increasing contact with reality - no elegant plan survives that. The constraints of available resources and the details of what it is actually for take their toll.

Elegance requires constraints consistent and few, whereas reality is a seething mess of capricious special cases - the far end of the spectrum from pure consistent abstractions.

But it's not only that it's easier; he also means that a solipsist chasing idealized butterflies is not an engineer. An engineer is meant to solve real, practical problems.


"To design a spacecraft right takes an infinite amount of effort. This is why it's a good idea to design them to operate when some things are wrong."

I feel like this applies to everything. I mostly work on ecommerce and analytics systems, and I feel like I'm having to make this same argument all the time.


There is an interesting, opposite mindset: The program will not function unless nothing is wrong.

It was surprising how reliable the programs turned out to be. But the shocking part is how small they are.

I'm starting to be convinced that the only way to write reliable code is to spend most of your time trying to remove code. And not at the expense of clarity; tricks like that don't matter.


Very small programs don't solve complex tasks. If you plan to glue many small, correct programs together you'll likely get unintended behavior from the collection. You're just shifting the complexity around.


And this is why subdividing code too finely is as bad as not subdividing it finely enough.

In an informal way I think of it as a good design being 'square', as in the design is distributed evenly across all scales. So a small program might have ten functions of about ten lines each. A bigger one might have ten files, each containing about ten functions of about ten lines. It's different in every project but solid, elegant, well structured systems always seem to feel 'square'.


> ...you'll likely get unintended behavior from the collection. You're just shifting the complexity around.

This is one of the valid criticisms of OO. "Gluing" small (mostly) correct objects together can produce unforeseen results in the aggregate, or can be hard to understand in aggregate.


   Very small programs don't solve complex tasks.
That's not quite right (some simple algorithms solve complex tasks). I think it is more accurate to say that very small programs can't model or facilitate complex interactions.

Your point about shifting complexity around is a good one.


Well yeah, the universal Turing machine has a pretty small program and solves all kinds of problems. But that's not really what I imagine when I say "a simple program".


I wasn't being that abstract; I'm thinking of real-world systems.

Some numerical solvers, graph algorithms, control systems, etc. have this property. They are definitely what I would call a simple program, even if the math behind them is not. YMMV.


That's why you write a very small core with the essential tasks, which include updating the non-core, and then add the complexity in another layer, where failure is handled by the core.
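A minimal sketch of that layering in Python (a supervisor-style loop; the names, restart limit, and backoff are my own illustrative choices, not anything from the comment):

    # Hypothetical sketch: a tiny, trusted core supervising a complex,
    # failure-prone non-core layer and restarting it when it breaks.
    import time

    def run_noncore():
        """The complex layer: may raise on bad input, network errors, etc."""
        raise RuntimeError("something went wrong in the non-core")

    def core_supervisor(max_restarts=3, backoff_seconds=1.0):
        """The small core: its only jobs are to run, watch, and restart."""
        for attempt in range(1, max_restarts + 1):
            try:
                run_noncore()
                return  # the non-core finished cleanly
            except Exception as err:
                print(f"non-core failed (attempt {attempt}): {err}")
                time.sleep(backoff_seconds * attempt)
        print("giving up: escalating to a human or an outer system")

    core_supervisor()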


Something interesting about your point is that people will eventually land on this fundamental design as long as they are 1) curious enough about programming to learn new things and 2) continuously improving their abilities. I think what surprised me the most over my lifetime is the number of people writing code who do neither of those things.


The collection is a program.


As a counterexample: e=mc^2


Which is just a series of symbols. Using it or explaining it is far more complex.


What makes you think that these approaches are opposite?

When you introduce containers in some form, it becomes reasonable to add monitoring for errors and to make sure that there is redundancy. Now if error conditions are found, it is OK to lose the container; another will be along shortly to pick up where it left off.

This approach is often used in distributed software. For example a good MapReduce framework will take care of detecting failing machines and replacing them, hopefully with perfect results. Erlang uses the same approach, and some of the most reliable software in the world has been written with it.

This idea does not just apply to the digital world. For example, each engine in a Falcon 9 is able to detect that it is not working right and shut down. The overall system is designed to work if any 7 engines still fire.
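To see why that kind of redundancy pays off, here's a quick back-of-the-envelope in Python. It takes the comment's "7 of 9" figure at face value and assumes, purely for illustration, that each engine works independently with probability p:

    from math import comb

    def p_at_least(k, n, p):
        """Probability that at least k of n independent engines work."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.99  # assumed per-engine reliability, purely illustrative
    print(f"all 9 fire:      {p_at_least(9, 9, p):.5f}")  # ~0.91352
    print(f"at least 7 of 9: {p_at_least(7, 9, p):.5f}")  # ~0.99992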


This applies to some things, not to everything. In some systems the right thing to do when an error is detected is to halt everything. In these systems, the cost of continuing to operate when errors are present is much higher than the cost of halting operations.

Take high-frequency trading, for example.


Erlang (software) and Tandem (hardware) take the sensible approach of halting everything that the error has touched, while allowing the overall system to proceed.


There are only finitely many designs that will fit in a Hubble volume. It's hard to believe that choosing among this finite number of choices takes infinite effort.


This one really tells a story:

"A bad design with a good presentation is doomed eventually. A good design with a bad presentation is doomed immediately."

It covers both sides of the issue: it doesn't matter how right you are if you can't convince anyone, while bad plans can persist in sucking up resources because they were 'sold' well.


It is remarkable that most organizations try to "fix" this by teaching everyone to present (i.e. sell ideas) better, rather than teaching people how to better listen to and evaluate new ideas.


As somebody teaching presentation skills when I'm not wearing my engineering hat: The reason I teach (and as far as I know, the reason my org wants me to teach) is not for people to be better able to "sell" ideas.

It's because a lot of engineers are horrible communicators. Not a surprise, and not their fault - it's never been part of their training, so how would they be good at it? But it means that often, the way they present ideas is at best only understandable by fellow engineers with experience in the domain. And often not even by those.

And that's what most corporate presentation training I've seen focuses on - how you can say what you want to say more clearly. (And, equally important, how you can deal with the stress of being on stage in front of a group of people.)

Presentation is rarely about "selling", but about communicating an overview concisely and cleanly.

(It's also incredibly hard to teach salesmanship. It requires a lot of charisma and empathy, and that's not something you teach in an afternoon or two.)


The reason for this is quite simple. Presentation skills are taught to underlings, but evaluation skills have to be taught to decision makers. It so happens the decision makers also decide who has to put work into training.


I disagree. Everyone is someone else's underling, even at the top of the corporate hierarchy, where you have the executives reporting to the investors/shareholders.

It then follows that everyone has to be taught presentation skills, just as they're taught evaluation skills.


Also relevant: Beginning Engineer's Checklist

http://www.piclist.com/tecHREF/begin.htm

If a ten commandments of growth hacking ever comes into being, my hope is the first law states: Thou shalt not spam ;)


I can date that list to the early 90s because the Amazon link is to the old edition of AoE, and I lived through the Motorola anecdote in the early 90s: "Hey world, design something using our new cutting-edge microcontroller." "Sorry world, my bad, our sales guys did such a good job that nobody but GM will get a shipment for the next three years." I'd add a corollary to that anecdote: the market to scale software is infinite, but not hardware; there are plenty of people out there who can rewrite the code for an 8051 faster than anyone on the planet can magically make late 68HC11 shipments arrive.

An interesting 4th book to add to the book list is the ARRL Handbook. If there's anything you don't understand between those covers, then you've just identified what you need to learn to be a well-rounded engineer. AoE is still good for strictly design topics. "You can't design anything better than you understand the application" is another semi-related good one.



Hahaha, I loaned out my copy of AoE and ended up having to buy a second copy. Truth.


My hope is that the 0th law is: Thou shalt not call yourself a growth hacker


> 38. Capabilities drive requirements, regardless of what the systems engineering textbooks say.

This is so true. Yet it's the inverse of what all the textbooks say.

How can it be this way? This is a huge blind spot right in the middle of systems engineering.


It's because your customer (who writes requirements) typically doesn't know 1) what they actually want, or 2) what capabilities actually exist. This is why a good customer will allow the company to help them write their requirements.


> 14. (Edison's Law) "Better" is the enemy of "good".

That's a traditional Russian saying; I've never seen it attributed to Edison before.


Hmm traditional French saying as well: "Le mieux est l'ennemi du bien"

I too call shenanigans on the Edison bit ;)


Assuming Edison is Thomas Edison, the Wikipedia page about him has nothing to say about any law of his. I suppose they meant another Edison, though that is sort of cheating ("Obama's law: such and such" -- oh, I meant Jamie Obama)


Why would that be considered cheating?

I often google the names of personalities I know and get a footballer or a singer in the first results.


Well, when using a very well-known name (such as Edison, Newton, or Obama), it is expected that it's about Thomas Edison, Isaac Newton, or Barack Obama, not some random dude who happens to have the same last name. If you want to quote Jason Edison, say it's Jason Edison's law to prevent misattribution, instead of piggy-backing on the well-known name.


The expectation depends on the audience and the context of the text. Maybe Jason Edison would be more obvious than Thomas in a given setting.

Maybe there's an Edison that spacecraft folks know and we don't.


The Taylor series and Coolidge effect want to have a word with you.


As is common with household names, a fair percentage of the lore associated with them is nonsense.


That saying is also attributed to Voltaire (in French, of course). General Patton said it differently, and I quite like his take since he adds the aspect of time to it: "A good plan violently executed today is far and away better than a perfect plan next week."

http://www.edbatista.com/2009/04/voltaire-patton-perfection....


Wikipedia says it's Voltaire quoting an ancient Italian saying.

https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good

Old sayings don't really have frontiers, so it might as well be an old Russian saying.


> Old sayings don't really have frontiers, so it might as well be an old Russian saying.

I thought the classic Russian problem is having too many frontiers.


I actually wrote a paper with that title. My favorite alternate version is from William Hazlitt (1778–1830):

"He who is determined not to be satisfied with anything short of perfection will never do anything to please himself or others."


This saying coincides with the ever-popular "done is better than perfect." My dad (a microwave engineer) taught me a variation: "Perfect is the enemy of good enough."

The concept is very important to learn, especially for those with a strong sense of idealism.


"You know what else is the enemy of good? Terrible!"

http://wondermark.com/975/


I think the attribution to Edison is a tongue-in-cheek reference to the War of Currents. https://en.wikipedia.org/wiki/War_of_Currents


It was probably once Tesla's Law


> 29. (von Tiesenhausen's Law of Program Management) To get an accurate estimate of final program requirements, multiply the initial time estimates by pi, and slide the decimal point on the cost estimates one place to the right.

I was once commenting to a friend that my project was taking about 12 times longer than planned, and he said, "ok, about a factor of 4pi". All my time estimates get a factor of 4pi increase now.
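For what it's worth, the law as quoted is easy to apply mechanically; here's a made-up worked example in Python (the starting numbers are invented), plus the 4pi the commenter landed on:

    from math import pi

    initial_time_weeks = 6
    initial_cost_dollars = 250_000

    final_time_weeks = initial_time_weeks * pi       # multiply the time estimate by pi
    final_cost_dollars = initial_cost_dollars * 10   # slide the decimal one place right

    print(f"time: {initial_time_weeks} -> {final_time_weeks:.1f} weeks")   # 6 -> 18.8 weeks
    print(f"cost: ${initial_cost_dollars:,} -> ${final_cost_dollars:,}")   # $250,000 -> $2,500,000
    print(f"4*pi = {4 * pi:.2f}")  # ~12.57, close to the 12x overrun above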


"18. ... Too much reality can doom an otherwise worthwhile design, though."

That is entirely too true.

"20. A bad design with a good presentation is doomed eventually. A good design with a bad presentation is doomed immediately."

And that explains NASA. (Except for the corollary, "A good design with a good presentation is doomed, too.")


Colin Chapman's law of race car design: "simplify, then add lightness".


Keep in mind the following is also attributed to him:

"Any car which holds together for a whole race is too heavy."




SLS, meet Akin's Law #39. Law #39, meet SLS.


On the other hand, new launch vehicles are good for jobs. Well, probably only in Huntsville and nowhere else, but Sen. Shelby will see to it that nothing happens to that work, even if it's to the detriment of NASA as a whole.


I always suspected that to be true regarding the Apollo program.

Well...

Today I see how much the Soyuz-2 rocket costs vs. how much the Soyuz spacecraft costs... and wonder - maybe it's still trickier to create the Apollo spacecraft and LEM than the Saturn V, even though in Russian history the N1 turned out to be harder to make than the payloads...

Rockets have been doing roughly the same thing since 1957 - getting things to orbit - while the payloads keep changing, with all those stations, telescopes, probes, monitoring satellites, etc. Rockets aren't that much "rocket science" anymore - but the payloads surely are. So maybe - just maybe - this law can be amended a little bit.


This reminds me of Jon’s Law: any drive powerful enough to be interesting is powerful enough to be a weapon of mass destruction.


Assume that you could discharge a full Tesla battery in a picosecond. It would be unpleasant to be nearby.
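A quick back-of-the-envelope in Python; the roughly 100 kWh pack size is my assumption for a large Tesla pack, and the picosecond comes from the comment above:

    energy_joules = 100e3 * 3600        # 100 kWh -> 3.6e8 J (assumed pack size)
    duration_seconds = 1e-12            # one picosecond

    power_watts = energy_joules / duration_seconds
    tnt_equivalent_kg = energy_joules / 4.184e6   # 1 kg TNT ~ 4.184 MJ

    print(f"average power: {power_watts:.1e} W")             # ~3.6e20 W
    print(f"total energy: ~{tnt_equivalent_kg:.0f} kg TNT")  # ~86 kg of TNT equivalent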


11. Sometimes, the fastest way to get to the end is to throw everything out and start over.

If this ever happens to me in my career I will start believing in god or whatever you want. Dealing with "legacy" tech has been the only constant in my life across several unrelated industries (even in the supposedly brash and agile games industry).


I believe most of the laws could be applied to building a minimum viable product.


"2. To design a spacecraft right takes an infinite amount of effort. This is why it's a good idea to design them to operate when some things are wrong . "

One of the key definitions of antifragility.


The "Miller's Law" entry seems wrong in more than one way...

Does anybody know which Miller it refers to?


omg



