An Astronaut’s Guide to Mental Models (fs.blog)
260 points by yarapavan on Feb 12, 2020 | hide | past | favorite | 57 comments



I'm conflicted about the whole "Mental Model" thing that has sprung up in the last couple years. In principle, naming and sharing useful patterns of thought is useful. In practice, every time I see them like this they make my skin crawl. Mental models are supposed to just be models, but, like, in your mind. People are building a pyramid on a mole hill instead of just noticing the hill...

Ah, just realized the applicable Mental Model according to TFA. They're blurring the distinction between the map (mental models) and territory (actual thinking).

Ed: first paragraph was largely superfluous.

Ed2: "building a pyramid on a mole hill" describes so many things. Did I just invent a Mental Model? I think I did.


As much as I personally like and respect Shane from Farnam Street, I don't like the current trend of mental model writing as much. I'm probably one of the more vocal critics, in fact (see: https://commoncog.com/blog/the-mental-model-fallacy/). The problem with mental models as a thing these days is that it conflates three different things:

1. Frameworks — 'mental model' is often used as a synonym for 'framework', which (as any competent practitioner knows) comes with its own set of problems.

2. Thinking Tools — so, things like 'reasoning from first principles'.

3. Mental models, as defined by Piaget & Papert — the original definition of the term, as in 'tacit mental representations of expertise'.

Because mental model writing conflates the three, it can be very difficult to get anything useful out of it. Each of these three categories comes with its own strengths and caveats. It is important to separate them.

That said, I think the fad has its benefits: it's led to a resurgence of systems thinking. Which is net good, I think.

A more nuanced, comprehensive summary of my criticisms may be found here: https://commoncog.com/blog/the-mental-model-faq/


I'm with you. The trend has reduced "mental model" to a synonym for "concept", basically. Everyone wants life hacks, but the value of "model thinking" is to avoid hacks (IMO) by externalizing your cognitive processes and making them more amenable to improvement.

One value of models is that they reduce the need for memorization, but blog posts like this ask you to memorize more! The person below who said they have 100s of mental models written down makes me go "?????".

Second-order thinking isn't a mental model, it's a way to draw more conclusions out of your models (and therefore conduct more, better experiments).

https://twitter.com/jfarmer/status/1225807692561813510


I'd bet they'd call these Marcus Aurelius mental models:

http://classics.mit.edu/Antoninus/meditations.html


I'm confused by the portrayal of these as mental models. These are aphorisms[1]. A mental model is a reduced-complexity representation of something that allows prediction or estimation of something that is otherwise incomprehensible[2].

This "Map is not the territory" one in particular is more of a word of wisdom or admonition against oversimplification than a process simplification that allows quick decision making.

Second-order-thinking is not a mental model. It is a way of constructing mental models about things. You can add second-order-thinking to your mental model of how stones roll down hills or how people will react to your speech. They must be interrogated as you contemplate actions to determine the best outcome, and therefore the best action[3].

TFA is clickbait.

1. https://en.wikipedia.org/wiki/Aphorism

2. https://en.wikipedia.org/wiki/Mental_model

3. http://www.modeltheory.org/papers/2005HistoryMentalModels.pd...


Not all of them are aphorisms. "Circle of competence" is closer to what you could call a "model", even if it is pretty trivial. I'm pretty convinced now that part of the problem with the Mental Models meme is that they aren't all the same kind of thing.


Circle of competence is a guiding principle of where to trust your models. It is not a model. How can I "query" the model to "predict" an outcome about my competence at a highly general level? I can't, but I can query a model of (e.g.,) an interviewer to see how he might respond to a joke, get a response, and then apply a high value of uncertainty to that response because it is outside my circle of competence (telling jokes is not something I'm good at).

"Don't trust your unvetted models" is a better way of saying this.


It'd be nice if we could move the conversation about Mental Models into "how to evaluate such a model for quality" as soon as possible.

Otherwise the novelty of the most recent proliferation of the term is pushing critics to decide "good or bad" regarding _mental models in general_, which is, to be frank, pretty dumb. Certainly such a dichotomy is much worse by itself than a broad field in which some poor models, applied poorly, are mixed in with a lot of really great examples.

And I guess this doesn't even get into the value proposition of a task like cataloging mental models and/or quantifying them via simple lists...mental models are a core aspect of human psychology and there's really an impressive amount of depth to the topic. Ramming a bunch of them together for quantity seems like a good way to make readers feel either over-confident or overwhelmed.


Questions answered from different mental models will get you different answers. Just as important as having these different "mental models" or "perspectives" (if you will) is discerning which is correct. AKA thinking for yourself. In my mind, a mental model is only useful for generating ideas from which you can gain another insight. No model has the right answer for every question.


Ah, but that's the subjective-leaning perspective on mental models. The yin of our yin and yang here!

The objective-leaning perspective may reel a bit from the "think for yourself" critique by the subjective side, but it counters with "don't reinvent the wheel" and tends to be hyper-efficient at integrating outside thought into the current context.

(BTW thank you for your comment, this gets into the good stuff)


All very true, but I suspect you'll still find that some models are more frequently applicable, some more rarely, and some very frequently deceptive unless handled carefully. Those are good (albeit highly subjective) measures of quality.


Would be nice though to map which models are most relevant for specific questions.


Validity, accuracy, reliability...science has helped us go a long way toward sorting it out.

We've come so far in our measurement that a stringent internal critique has begun of our time-tested models for measuring those criteria.


I fully, strongly agree with everything here, especially regarding the folly of trying to evaluate "mental models in general". To clarify, lest I sound self-contradictory: my complaint is not with all mental models in general, but with the segment of discourse[0] that tries to point them out as a special thing. In fact, it's that selection into a group that invites the fallacious evaluation of the whole group.

Furthermore, I think you've touched on something interesting with the point about cataloging or listing Mental Models. Part of it is the aforementioned point about inviting judgment on the group as a whole, but that's not all. It's a really odd collection of entities to put in one basket. Mostly all they have in common is that they exist in the mind, so what gets included in practice seems kind of arbitrary. In particular, putting them all on (even superficially) equal footing is dangerous.

[0] Please forgive me if I'm inventing or misusing this term. It seemed like a good idea at the time.


Mental Models are nothing more than useful heuristics to help you navigate and understand reality.

Whether a mental model ends up with a name on a list is largely down to its utility in aiding that for a wide audience.

In my experience they have been extremely useful.


Well, the premise of mental models comes from the fact that we never perceive reality as is. We are always building models of the world, not actually "seeing" the world.

Our senses are filtering machines (i.e. reality tunnels). We only perceive what is deemed useful, or is available according to our capacities. Thus in essence, we are just creatures of models.

Mental models are just a more precise awareness of that phenomenon. It also sounds cool and sticks with people because mental models are focused on actually being seen (mentally). As you say, mental models rise from the insight that map is not territory, and that we have no territories in our minds, only maps.


That's yet another valid use of the term "mental model", but not really the usage in TFA.


Fwiw, I found the linked article on "The Map Is Not The Territory" an interesting read.

https://fs.blog/2015/11/map-and-territory/


The origin of that idea is in this sequence of blog posts, written in roughly 2009:

https://wiki.lesswrong.com/wiki/Map_and_Territory


The origin of that phrase is actually Alfred Korzybski - https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation ; also see Magritte's The Treachery of Images etc., the idea is not recent.


The notion of mental models dates back at least to Plato's allegory of the cave. It is not new.

It's a frequent topic of philosophical discourse, and a major component of cognitive psychology.


What's TFA?


"The fine article".

Sometimes the word "fine" is spelt differently.


The astronaut angle is just stupid clickbait.

If you want to learn habits from people who make on-the-fly life-or-death decisions, there are much bigger pools of people who do that, who have experience much more relevant to whatever it is you're doing, than people who managed to survive in a space-can[1].

More applicably, the most common life-or-death decision made by people reading this is how much attention they pay while driving. Modeling that in a way that makes you better at it would likely provide substantially more, er, lack of bang for your buck than worrying how Armstrong made particular choices.

[1] I don't mean to belittle astronauts - they're clearly exceptional in their own way. I'm questioning how relevant their unique experience is to others' lives, and what other relevant experience they have that bigger pools of other people don't.


> The astronaut angle is just stupid clickbait.

It's not an "angle" and it's definitely not clickbait -- they're excerpts from a book written by an astronaut.

> the most common life-or-death decision made by people reading this is how much attention they pay while driving.

Yes, sure, they should pay attention while driving, but why can't they also learn lessons from astronauts?

> managed to survive in a space-can[1] > [1] I don't mean to belittle astronauts

Sorry, it feels like you are. There are a lot of interesting and useful lessons in the article and the book, and you're kinda dismissing it just because it's from an astronaut.


I saw a BBC documentary where they interviewed every moon-walking astronaut who would speak to them about life. Most were total and utter cranks! Strangest thing I've ever seen. You can understand why once you've seen it: peaking at a young age, the incredible stress of being a test pilot and watching so many colleagues die through no fault of their own, the adulation for surviving, which really wasn't up to you. Etc. I have huge sympathy for all the astronauts - not that they'd want it, and good for them.

But I'm not taking an astronaut's advice on literally anything just because they are or were an astronaut. Doing that ends up in the realms of spoon bending. If an astronaut has an idea, it stands or falls on the merit of the idea. If you want a heuristic: most astronauts' ideas aren't worth your time. It's a sad conclusion to come to, and I'm sure it's unpopular because we love having glamorous heroes, but there it is.


I agree with your criticism of hero worship, and the personal issues with the early pilots / astronauts, but modern space flight is nothing like that.

It's still dangerous, but our success rate with rockets is far, far better than it was in the 1960s. Complex systems, as Hadfield notes, succeed because of diligence and cooperation, not luck and cowboyism. Most astronauts in the 21st century don't have 1-in-4 of their colleagues killed on the job.


Astronauts are still the monkeys in a can. The success or failure now, as it always was, is in the engineering team, not the warm body who (to paraphrase Carl Sagan) is /still/ really only present for PR purposes rather than to optimise the advancement of technology and science.

Head of space flight engineering or whatever at NASA - yeah? The heuristic would be: listen to them, they might know something. Monkey in the can on top of the fire-cracker? If they do know something, it's more a coincidence with their usefulness in PR than anything else. Sad but true. If NASA abandoned human flight tomorrow they'd probably get a lot more bang for buck - other than in PR, because we love us a person to fuel our imagination, and I am no different there. How flippin' cool is it to ride the fireball into space?!?


You are wildly uninformed when it comes to astronaut roles. Could the ISS be automated? Perhaps, but at astronomical costs. And most tasks would still require remote operation. Also many experiments are done precisely on humans themselves.

In general, modern astronauts have many roles -- flight managers, technicians (doing and evaluating repairs), scientists, experimental subjects, and finally, yes, communicators to an extent (frankly great for inspiring people). Obviously Hadfield isn't a perfect human being, and I'm sure he doesn't like adulation, but his vast set of skills, including his communication effectiveness, is quite impressive (and musical ability -- check out his space video clip!). I assure you he's no crank (perhaps you're biased by early space program people who were test pilots). I recommend checking out his and other ISS youtube videos to get an idea: https://www.youtube.com/watch?v=8Hj3GnPRsJ4


Compare the cost of automation development with the cost of life support systems development, launch and maintenance, and it's not even close.

Yeah, I know it sucks when someone poo-poos on your (and my) heroes. I know. "For a successful technology..." etc. Feynman wasn't perfect either. Astronauts continue to be selected according to an ideal of how they will come across to the public, from Alan Shepard on. And there was something very impressive about all of them that played well in the media. It took 30 years to find out that so many of them are also pretty weird and not people you'd take scientific advice from.

Experiments on humans in space are mostly interesting as research for space life-support. You do different, and arguably vastly more bang-for-buck, experiments if you're not using humans to do them.

The space race was literally cold war propaganda (and wonderful, beautiful stuff at that - vastly better than most propaganda). NASA continues to put PR front and centre. You can disagree, that's fine, decide to what extent the PR is influencing you and if you're sure it's minimal I can't ask for much more, right?

The ideal would be if NASA had basic science front and centre rather than frittering away vast wealth on the pretence of children's astronaut dreams. And I want a flying car too, but reality should win out.


Oh here we go -- this is a vast subject (and I understand some of your questioning), I wish I had time for a more thorough analysis, but here are some key points.

- Humans are actually fantastic space exploration hardware.

We already have robotic vehicles exploring other planets. They are not autonomous, and have glorified Arduinos in them. This is not for lack of money or development, but because autonomous exploration is extremely difficult, and there is currently no hardware remotely good enough that can also survive radiation and the space environment. Exploration currently is extremely slow, done through remote control, with up to 24-minute delays (Mars).

- Humans can self-reproduce with low-tech inputs

We are perhaps 50 to well over 100 years away from compact machine self-reproduction, and even further from self-reproduction with human-like intelligence. Only humans can make colonies for the near future.

- AGI (or just good enough scientist AI, explorer AI, etc) is uncertainly distant

It could be 20 years away, or it could be 100 or more (if our civilization even survives that long through wars/climate change/etc). Economically viable AGI is uncertain; economically viable, compact, radiation-resistant AGI is vastly more so.

- It is worthwhile to bet on long term space exploration and habitation

This is the most difficult case. I think it's worthwhile to devote resources to long-term space exploration and habitation, in particular to research, since it is so expensive right now and we are still experiencing rapid technological change. We want to eventually live on other planets (and even other star systems). The other aspects are complementary, like inspiring people to work in engineering and science; the main thing is giving hope for the future, even if it's a future you won't live to see.

Most of the research finds applications in terrestrial systems anyway (remember that we still do pure mathematics, whose research value tends to crop up decades or more later).

---

It is expensive, yes (ISS costs about $3 billion per year for NASA apparently), but overall I think it's a tiny investment for great returns (at varying levels of risk).

---

Finally, to expand on hope: hope is many things, but one of them is simply the way people act on their expectations of the future. If they don't expect to live long, or don't expect a long-term future worth living for later generations, you can expect we will act more selfishly and greedily; a self-fulfilling prophecy. Thus it's extremely important not to neglect sustainability and somewhat distant dreams.


I think you are working from a false premise.

Modern astronauts are many things; monkeys in a can is not one of them. They have many roles and skills ranging from engineering to teaching and many things in between.

They perform engineering and procedural review, carry out public outreach that inspires the scientists and engineers of the future, manage operations during launch and in-space operations, and are involved in most parts of the approval of new equipment they will be using.

Yes they have a PR role, but getting photographed on the top of a rocket is a very small part of that.


They're only present for PR purposes? Yeah, um... let's see if I remember right...

Apollo 11: Had to make a visual adjustment because the pre-mission selected landing area was a boulder field. We didn't have computers that could do visual field processing in 1969. Without the humans, we could have lost the lander.

One of the Apollos (13?): The guidance system was all messed up (maybe had reset after the explosion, and reset to being on the pad?) One of the astronauts manually shot the stars to get correct current position data to input into the guidance system. Without this human intervention, the mission would have been (literally) lost.

So... yeah. Not exactly monkeys in a can, just there for PR.


Which documentary? I'd really like to see it before passing any judgment. There are a lot of really sharp astronauts out there, current and former.

NASA also employs a confidential psychological evaluation process for astronauts. I have heard some argue that the process has been biased, in the past, toward outcomes that may produce astronauts who, for example, struggle--relative to their other strengths, that is--to keep a handle on the emergence of emotion or complex experiential outcomes. (And I would mention that many of us humans down here on earth fall into this boat as well)


It was BBC World. It was a series, one moon walker per episode, and I'm guessing 20 years ago. I'm not turning up the needle in the BBC astronaut haystack for you, I'm sorry.

There really should be a list of every BBC doco somewhere. Not turning that up either.



That is cool, thank you! But still not turning it up. Grr.


Reminds me of the book The Right Stuff by Tom Wolfe. He describes the start of the US space program, being a test pilot, public reaction to being an astronaut... Astronauts were treated like gods back then, or as Wolfe called them, "single combat warriors". I think a lot of that sentiment still exists today.


I've just discovered "Mental Models". It's really weird putting a name to the mechanisms I've used to explain and reason for as long as I can remember.


Smell BS? It's a new buzzword. Just wait, it will pass.


Not everyone has such a rich vocabulary to describe the intricacies of the human mind and the mechanisms it uses to map and navigate the world.

For a child, "I pictured it in my head" is equivalent to an adult saying "I have a mental model of it".


The term was coined in the 1940s so it is not likely a buzz word.


The recent usage of "mental model" is. The fact that the term had a more useful meaning before makes its buzzwordification all the more irritating. Before, a mental model was just a way you thought of something. Now they're Named Things to be shoehorned all over the place. Imagine if "design pattern" had been a casually but widely used term before Design Patterns came out (maybe it was, I wasn't around then).


So... What would you call them instead?


I would probably try to find a more specific term for the specific idea I'm talking about rather than try to use one term for all of them. That said, "heuristic" is a good term for a lot of them. "Framework" is another one that came up in this thread.


The entire reason to call them "Mental Models" is to aid in their communication and discovery.

There are many heuristics that are not mental models. There are many frameworks that are not mental models.

Labelling something a "mental model" as used in this context serves to communicate "here is an abstraction with an unusually good power-to-weight ratio that you can leverage to aid your thinking/reasoning about situations that meet a given set of criteria".

It's actually a very specific thing.


Less BS, it's just vocabulary that isn't widely circulated. Possibly due to a "rationalism" fad. I'm looking at you CFAR.


Promoting diverse and inclusive ways of thinking is a buzzword now?


I've heard the word "heuristics" more recently, but I think I learned (and later forgot) both terms when I was a kid.

One interesting thing about heuristics is that it's important to learn when they _don't_ apply. If a heuristic is true 80% of the time, it's also untrue 20% of the time.

And the converse to that thought is that you shouldn't let that 20% failure rate discourage you from using the heuristic the other 80% of the time.
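A toy sketch of that tradeoff (my own illustration, with made-up payoff numbers): whether a heuristic with an 80% hit rate is worth following depends on the cost of the failures, not just the hit rate.

```python
def expected_value(p_correct, gain_when_right, loss_when_wrong):
    """Expected payoff of following a heuristic blindly."""
    return p_correct * gain_when_right - (1 - p_correct) * loss_when_wrong

# Cheap mistakes: the 20% failure rate barely matters.
cheap = expected_value(0.8, gain_when_right=10, loss_when_wrong=1)    # 7.8

# Catastrophic mistakes: the very same 80% heuristic is a net loss.
costly = expected_value(0.8, gain_when_right=10, loss_when_wrong=50)  # -2.0
```

Same heuristic, same 80% accuracy; only the stakes differ.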


The "mental models" stuff just seems like gobbledygook. Hadfield is giving regular kind of advice you read everywhere from Sun Tzu to John Wooden:

    - Don't be selfish
    - Learn from your mistakes
    - Expect the unexpected
    - Etc.
Doesn't really seem like Earth-shattering stuff.


It's not so much earth-shattering stuff as confirmation. These are not easy lessons for many people, due to various psychological factors and history. It helps push people in the right direction when the message is given by someone they respect.

An interesting part of leadership can be repetition, so this feels reasonable.


Is this confirmation useful? Or stated another way, would you feel less confident about aphorisms like "don't be selfish" and "learn from your mistakes" if a famous astronaut said they were bad ideas?


It all depends on the person, who they look up to, their history, their biases. A selfish person that values space travel may ignore all other sources of information, but the astronaut saying it may cause a moment of reflection.

Now, is this ideal? No, we are a strange-strange species.


I might be more confident of them if a famous billionaire said they were bad ideas...


There is a qualitative difference from model to model. I have a collection of hundreds of my own models, some of which are short and elegant, while others would print out into books easily (if edited down).

It is rare indeed to find a non-obvious, useful mental model that can be boiled down to 5-10 words without losing significant leverage. However, the _reason_ they are boiled down at all is a communications burden, such as the one confronted while compiling mental models for publishing. That the author offers links to find out more is, I think, a reasonable effort at dealing with that problem.

Let's say your friend asks you to summarize the value of _Jane Austen's_ works while you drive them home from the bookstore. You have been asked by your friend to build a mental model. How much reduction of your experience with the books is reasonable, and where does it lose leverage as a result of the reduction? Yet you have an opportunity to help someone make a great new discovery. This process can be very frustrating for the builder of the model.

So while I appreciate a high-quality critique of a bad example of something, I have to say it seems like those examples are a bit reductionist and possibly unfair given the sheer number of mental models out there for examination, and the reasons for, or conditions under which they were selected for publishing or communication.


I did some work with a Design Anthropologist a while ago, in that field a Mental Model was used to describe how particular people think, and that was used to help solve their problems. That seems like a good use of the term.

The way the term is used in the article I would rather call a thinking style, or simply a way to think.

I think there is value in them, but often they are pretty bloody obvious and not in any way new.

Circle of Competence is just playing to your strengths.

Second Order Thinking is just imagining consequences.

Generally they can be boiled down to a few questions we can ask about a situation.

What about this problem am I uncertain about? What assumptions am I making? How might I increase confidence in my solution?

Anyway...they are here for a while.


When I hear "mental model" I think more along the lines of perception. The value of a mental model is in understanding it, to the extent of being able to replicate or teach it. It is a model, and a model defines a set of relatable understandings for someone else: finding a common-sense set of abstractions.



