Mental Models I Find Repeatedly Useful (medium.com/yegg)
929 points by orph on July 6, 2016 | hide | past | favorite | 189 comments



Interestingly, I find my favourite nitpick here: Occam's razor. The article quotes it as "The simplest solution is usually the correct one". This is a common misinterpretation, and it's interesting that the quote links to the Wikipedia page, which has a better statement: "Among competing hypotheses, the one with the fewest assumptions should be selected."

The key problem is equating simplicity with correctness. This is usually disastrous. Once you feel that something is "correct" you stop looking for ways to falsify it. That's the exact opposite of what Occam's razor is used for.

Instead, if you have two competing hypotheses (two hypotheses for which the evidence supports both), you use the one with fewer assumptions. Partly because the one with fewer assumptions will be easier to work with and lead to models that are easier to understand. But mostly because fewer assumptions make it easier to falsify.

Abusing this principle outside of the scientific method leads to all sorts of incredibly bad logic.


Very interesting.

From [1]:

Famously, Karl Popper (1959) rejected the idea that theories are ever confirmed by evidence and that we are ever entitled to regard a theory as true, or probably true. Hence, Popper did not think simplicity could be legitimately regarded as an indicator of truth. Rather, he argued that simpler theories are to be valued because they are more falsifiable. Indeed, Popper thought that the simplicity of theories could be measured in terms of their falsifiability, since intuitively simpler theories have greater empirical content, placing more restriction on the ways the world can be, thus leading to a reduced ability to accommodate any future data that we might discover. According to Popper, scientific progress consists not in the attainment of true theories, but in the elimination of false ones. Thus, the reason we should prefer more falsifiable theories is because such theories will be more quickly eliminated if they are in fact false. Hence, the practice of first considering the simplest theory consistent with the data provides a faster route to scientific progress. Importantly, for Popper, this meant that we should prefer simpler theories because they have a lower probability of being true, since, for any set of data, it is more likely that some complex theory (in Popper’s sense) will be able to accommodate it than a simpler theory.

Popper’s equation of simplicity with falsifiability suffers from some well-known objections and counter-examples, and these pose significant problems for his justificatory proposal (Section 3c). Another significant problem is that taking degree of falsifiability as a criterion for theory choice seems to lead to absurd consequences, since it encourages us to prefer absurdly specific scientific theories to those that have more general content. For instance, the hypothesis, “all emeralds are green until 11pm today when they will turn blue” should be judged as preferable to “all emeralds are green” because it is easier to falsify. It thus seems deeply implausible to say that selecting and testing such hypotheses first provides the fastest route to scientific progress.

[1] http://www.iep.utm.edu/simplici/#SSH4bi


The second quoted paragraph seems to be attacking a strawman. I don't think it was suggested that we should add silly details to improve falsifiability, but rather remove them. Moreover, it seems like this is a way to choose between existing theories, rather than a way to mutate one theory into a better one.


It's pointing out that the equivalence of "simpler" with "more falsifiable" is not perfect. Nobody is suggesting that we just add silly details for the sake of increasing falsifiability, but suppose two research groups independently arrived at those competing theories. Should we choose the simpler one or the more falsifiable one?


A simpler explanation for valuing simplicity is that a simpler theory requires less storage and processing in the human brain. A more complex theory could explode in complexity so that all of its parts and ramifications wouldn't be easily learnable by a human.


One way to think about Occam's razor is from a probabilistic perspective. Consider the Conjunction Fallacy -- for any two events the probability of both events occurring together is less than or equal to the probability of either one occurring alone. Yet it often makes intuitive sense to people that the more specific conditions are more probable than the general one. (See examples in the wikipedia page: https://en.wikipedia.org/wiki/Conjunction_fallacy)

So the more assumptions you add to a hypothesis, the more you get taxed on the likelihood of it being correct. Therefore the hypothesis with fewer assumptions is more likely to be correct.
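A minimal numerical sketch of that tax (Python; the individual probabilities are invented and assumed independent):

    # Each added assumption multiplies in a factor <= 1, so the joint
    # probability of "all assumptions hold" can only stay flat or drop.
    p_assumptions = [0.9, 0.8, 0.95]   # hypothetical P(A1), P(A2), P(A3)

    joint = 1.0
    for p in p_assumptions:
        joint *= p
        print(round(joint, 3))         # 0.9, then 0.72, then 0.684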


Occam's razor, as stated by GP, is not about correctness but tractability. In fact, if you look at it probabilistically, the hypothesis founded on more assumptions is more likely to be correct:

Suppose you have a hypothesis, H, which is based on assumptions A1, A2, ..., Ak. This can be phrased logically as an implication:

    (A1 & A2 & ... & Ak) -> H
Decomposing the implication, we get:

    !A1 | !A2 | ... | !Ak | H
Then

          Pr(!A1 | !A2 | ... | !Ak |  H)
    = 1 - Pr( A1 &  A2 & ... &  Ak & !H)
So, appealing to the conjunction fallacy, assuming that we are adding more assumptions on top, rather than having a greater number of different assumptions, the probability of success actually goes up.
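A quick Monte Carlo sketch of that step, with made-up independent probabilities. It shows the probability of the implication rising as assumptions pile up, which, as the replies below note, is not the same thing as the probability of H itself:

    # Estimate Pr((A1 & ... & Ak) -> H) for growing k, with independent
    # assumptions each true with probability 0.8 and H true with probability 0.3.
    # The implication is vacuously true whenever some assumption fails.
    import random

    def implication_prob(k, p_assum=0.8, p_h=0.3, trials=100_000):
        hits = 0
        for _ in range(trials):
            antecedent = all(random.random() < p_assum for _ in range(k))
            h = random.random() < p_h
            hits += (not antecedent) or h
        return hits / trials

    for k in range(1, 6):
        print(k, round(implication_prob(k), 2))   # climbs from ~0.44 toward 1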


> (A1 & A2 & ... & Ak) -> H

This is backwards. It should be

    H => (A1 & A2 & ... & Ak)
It's not "if these assumptions hold, the hypothesis is true". It's "for this hypothesis to be true, these assumptions must hold".

Suppose you have the hypothesis that Bruce Wayne is Superman. Then you see the two of them in the same room together. It's still possible that Bruce Wayne is Superman, but only if he has an identical twin. Your credence that Bruce Wayne is Superman should decrease accordingly.


At least in the terminology I'm used to, of mathematical proof, an assumption is a part of the context under which a thing is proven. So having more assumptions weakens the claim (and there is an associated weakening rule [1]).

In other words, the claim "Assuming Q, I prove P" does not mean (to me) that Q must hold in order for P to hold, but rather that one way to show that P is true is to show that Q is true.

[1]: https://en.wikipedia.org/wiki/Structural_rule


Umm .. no.

Assumptions are the left-hand side of an implication, by definition. (And the right-hand side is called "conclusion".)

The relevant statement here is not "for this hypothesis to be true, these assumptions must hold".

It is: "for this hypothesis to be derived this way, these assumptions must hold".

There is always the possibility that a hypothesis can be proved in a different way from different assumptions.

Unless, of course, your theory not only proves "(A1 & A2 & ... & Ak) -> H" but "(A1 & A2 & ... & Ak) <-> H". That is, if your theory shows that your hypothesis does not only follow from the assumptions, but is equivalent to its assumptions. That's quite a rare case, though.


If you see Bruce Wayne and Superman in the same room, then "Bruce Wayne is Superman" can only be true if you assume something you didn't have to assume before. It means you should be less confident that Bruce Wayne is Superman.

I'm using the word "assumption" in a natural way. (Also in the way that it's used in Occam's razor.) If you have a definition that says I'm using it wrong, then your definition is silly.


This example is totally unclear to me. Although you declared a clear hypothesis in your very first comment, it is totally unclear what exactly your assumptions are that would lead to this hypothesis.


You can form a hypothesis without basing it on anything. You could for example randomly generate 1 billion sentences and then try to test if they are true.


This is not what is meant by "hypothesis" in Occam's razor, which is about hypotheses that are based on actual assumptions (and using these assumptions to pick a "best" hypothesis).


Okay, it sounds like what you call assumptions, I would call "data". Or "background data" or something.

If I think Bruce Wayne is Superman, I might base that on the fact that they're both physically very fit; that one would need to be very rich in order to have the kind of technology that is indistinguishable from alien powers; that Bruce Wayne's parents were murdered, and this could conceivably draw him to a life of fighting crime, which is a thing Superman does.

That sort of thing leads me to form the hypothesis: "Bruce Wayne is Superman".

But that sort of thing isn't what Occam's razor is about. It's about things that we haven't observed to be true, but which would need to be true for the hypothesis to hold. You should prefer a hypothesis that requires fewer such things.

If I see Bruce Wayne and Superman in the same room, then in order for Bruce Wayne to be Superman, he must have an identical twin. I haven't observed him to have one, but that's what the hypothesis requires. Accordingly, my confidence in the hypothesis decreases.


The initial hypothesis is only a starting point. When building a model where 'mice are smarter than humans', you need to account for all the evidence out there.* Compared to the model where 'humans are smarter than mice', it's vastly more complex or vastly less testable.

* I have heard this referred to as hypothetical baggage or implicit baggage, i.e. if CO2 is not increasing temperature, then why not?


Sorry for the mathematical nitpick here, but that seems to me like a strawman. You silently moved from the original question:

    What is the probability that the hypothesis is correct?
To the very different question:

    What is the probability that the implication "from the assumptions follows the hypothesis" is correct?
Moreover, this different question has a clear answer for every logically consistent theory: It is 1, because it is always true!

Why? Because that's exactly what the theory proves logically. The theory can't tell you whether A1, ..., Ak are all true in the real world, but it does tell you that _if_ these are true, H is also true.

So this is really a typical strawman argument (although maybe unintentionally so): it is different from the original question, and it boils down to a trivial but misleading answer.

------------------

Going back to the original question, you'd have to compare the two hypotheses H1 and H2, where the set of assumptions of H1 are a strict subset of the assumptions of H2:

    A1 & A2 & ... & Ak -> H1
    A1 & A2 & ... & Ak & ... & An -> H2
It is clear that:

    P(A1 & A2 & ... & Ak) > P(A1 & A2 & ... & Ak & ... & An)
But from here it is surprisingly hard to conclude "P(H1) > P(H2)", because we have implications and not equivalences. That is, H1 may be true even though the assumptions don't hold. It may be true for different reasons and derived from a different set of assumptions that turn out to be true. Same for H2. So we need to take into account the probabilities for H1 and H2 to be "true for different reasons", which we'll name Pd1 and Pd2:

    Pd1 = P(not(A1 & A2 & ... & Ak) & H1)
    Pd2 = P(not(A1 & A2 & ... & Ak & ... & An) & H2)
To prove the probability variant of Occam's razor, we need to make the following additional meta-assumption: The probabilities that H1 and H2 are "true for different reasons" are very small, and moreover almost identical. So we have:

    Pd1 = Pd2
But with that meta-assumption, we can finally prove the probability variant of Occam's razor, as we can now express P(H1) and P(H2):

    P(H1) = P(not(A1 & A2 & ... & Ak) & H1) + P((A1 & A2 & ... & Ak) & H1)
          = Pd1 + P((A1 & A2 & ... & Ak) & H1)
          = Pd1 + P(A1 & A2 & ... & Ak)
          = Pd2 + P(A1 & A2 & ... & Ak)
          > Pd2 + P(A1 & A2 & ... & Ak & ... & An)
          = Pd2 + P((A1 & A2 & ... & Ak & ... & An) & H2)
          = P(not(A1 & A2 & ... & Ak & ... & An) & H2) + P((A1 & A2 & ... & Ak & ... & An) & H2)
          = P(H2)
In short:

    P(H1) > P(H2)
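As a numerical sanity check of the chain above, here is a tiny sketch with invented probabilities (independent assumptions, and the meta-assumption Pd1 = Pd2 = 0.05):

    # P(Hi) = P(assumptions_i) + Pd_i, valid because assumptions_i -> Hi
    # makes (assumptions_i & Hi) equivalent to assumptions_i.
    p_a = [0.9, 0.8, 0.85, 0.7, 0.95]       # hypothetical P(A1)..P(A5)

    def conj(ps):
        out = 1.0
        for p in ps:
            out *= p
        return out

    pd = 0.05                                # meta-assumption: Pd1 = Pd2
    p_h1 = conj(p_a[:3]) + pd                # H1 rests on A1..A3
    p_h2 = conj(p_a) + pd                    # H2 rests on A1..A5
    print(round(p_h1, 3), round(p_h2, 3))    # 0.662 > 0.457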


You are right, they are different questions, but the straw man was not intentional; I thought the original phrasing was ambiguous enough that it could be interpreted in both ways ;)

In other words, it was unclear to me what the answer to the question "Are the assumptions part of the hypothesis?" was. If, as I did, we assume that "yes, they are" then I don't think it follows that the probabilities will both be `1`, because we do not have logical proofs for the claims, the implication could only be true in the model (they are not necessarily entailments).

The waters are muddied further still when the hypothesis itself is phrased as an implication.

EDIT

It also strikes me that for your line of reasoning to hold, it is not sufficient that Pd1 = Pd2 are small, but instead `Pd1 = 0 = Pd2`, in order to justify this line:

    > = Pd1 + P((A1 & A2 & ... & Ak) & H1)
    > = Pd1 + P(A1 & A2 & ... & Ak)
Which is tantamount to saying

    (A1 & A2 & ... & Ak) <-> H1
  & (A1 & A2 & ... & Ak & ... & An) <-> H2
Is it not?

EDIT (2)

Ignore that, it is not tantamount, it is a weaker condition.


Maybe the final part of the proof I gave is a bit dense, so here are some additional notes.

First of all, if you know that

    (A1 & A2 & ... & Ak) -> H1
then the following two terms are logically equivalent:

    A1 & A2 & ... & Ak
    (A1 & A2 & ... & Ak) & H1
Also, for the proof which I gave it is sufficient that Pd1 = Pd2. It does not need them to be zero.
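A tiny truth-table check of that equivalence (here a single proposition A stands in for the whole conjunction A1 & ... & Ak):

    from itertools import product

    for a, h in product([False, True], repeat=2):
        if (not a) or h:               # rows where A -> H1 holds
            assert a == (a and h)      # on those rows, A and (A & H1) coincide
    print("A is equivalent to A & H1 wherever A -> H1 is true")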


Ah yes, I see. I guess I was looking for a place where the fact that `Pd1` and `Pd2` were small was actually used, but I guess that's not necessary.


No, the "both are small" was just meant to be a justification for assuming Pd1=Pd2.


Here's how I explain it to laypersons: "given two hypotheses, Occam's razor tells you which one to test first"


Interesting. Usually, the one I test first is the hypothesis Hn where P(Hn=true)/resources_required_to_test_Hn is the largest.
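A throwaway sketch of that ranking rule; the hypotheses, priors, and costs below are all invented for illustration:

    # Rank hypotheses by prior probability per unit of testing cost.
    hypotheses = [
        ("config typo",       0.50,  1),   # (name, prior, cost in hours)
        ("library bug",       0.30,  4),
        ("kernel regression", 0.05, 16),
    ]
    ranked = sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)
    for name, prior, cost in ranked:
        print(name, round(prior / cost, 3))   # test the top of this list first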


I meant this as an informal rule, i.e. the "spirit" of the law. I think your suggestion is fairly close, in effect, as you can consider assumptions to imply greater resources (namely in validating them).


AKA lowest hanging fruit first.

Never a bad idea


This is a good point. If you went with the "simplest" explanation then which one do you pick:

1. The patient's humors are out of whack. The treatment is bloodletting.

2. The patient has a complex infection involving many physiological systems like immune system, foreign bacteria, gut flora, etc. The treatment is rest and administration of a lab engineered antibiotic for weeks.

Or:

1. The patient is possessed by a Djinn/Demon/spirit and needs an exorcism from a priest/shaman/imam.

2. The patient suffers from mental illness which is difficult to describe let alone treat. Treatment will be years, if not decades, of a mix of therapy, lifestyle changes, and medication.

Sadly, these attitudes still exist, even in the industrialized West. I often visit /r/paranormal because I have a thing for ghost stories and sometimes there's a posting about "possession" which is very clearly about a mentally ill person. When I point this out and ask why this person isn't getting proper care, I'm downvoted to -5 near instantly. Yes, that's right, the guy saying "This isn't a demon, this poor woman needs proper medical help," gets argued with like it's the 13th century.


Occam's razor can't tell you which theory is correct, just which one to use when you have more than one theory that accounts for the data. It is purely a pragmatic way to rank competing theories.


I find it interesting that Ockham's razor has now moved from philosophy into applied statistics. That is, Bayesian statistics can quantify it (a more complicated model which fits just as well, or only slightly better, has a lower marginal likelihood), and in machine learning we use it in practice (to avoid overfitting, as overly complex models may fit the training data well but be suboptimal for generalization).

See also BIC (Bayesian Information Criterion) for selecting models.
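A minimal sketch of that kind of model selection on synthetic data (numpy; the Gaussian-error form of BIC is standard, the data and constants are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2 * x + 1 + rng.normal(0, 0.1, size=x.size)   # truly linear data

    n = x.size
    for degree in (1, 2, 5, 9):
        coeffs = np.polyfit(x, y, degree)
        sigma2 = np.mean((y - np.polyval(coeffs, x)) ** 2)
        k = degree + 1
        bic = n * np.log(sigma2) + k * np.log(n)      # lower is better
        print(degree, round(float(bic), 1))           # the linear fit should win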


My conclusion on reading this discussion is that if a bunch of smart people can't agree on what Occam's razor is useful for, it isn't useful!


>> drzaiusapelord, scoot

Occam's razor does not ask that you accept the simplest explanation. It asks that one take into account as many, and only as many, factors as necessary to explain a phenomenon. It does not promote fallacy or lessen rigour. It is a "loose leash but a tight chain".

As originally defined, it stated: entities should not be multiplied without necessity (Entia non sunt multiplicanda praeter necessitatem).

Bertrand Russell held the principle in high regard. This quote from Newton encapsulates its application: "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances." It is simplified for scientists in this form: "when you have two competing theories that make exactly the same predictions, the simpler one is the better." http://math.ucr.edu/home/baez/physics/General/occam.html

There is a line of scholarship that believes William of Occam (c1287-1347) never made the quote attributed to him. http://www.logicmuseum.com/authors/other/mythofockham.htm

What is termed Occam's razor by vog, asQuirrel, asmad and others is a statistical/logician's derivative, not really of concern to most people.


You do make one assumption however, which is: "discussions on the internet, lead somewhere".

Which they don't, therefore you can't say Occam's razor isn't useful. :) funny


I'd say the thing with Occam's razor is that it's easier to disprove a simpler answer with fewer assumptions, thus it's easier to place more trust in it if it does hold up to the same scrutiny as answers that have more assumptions and complexity.

Also, I like Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity." Generalizing here, but people _are_ stupid.


this is a info for test.. sorry


I wrote something on my whiteboard this weekend that is similar in concept: forced efficiencies at random intervals. I hypothesize that systems which have the fewest moving parts are less likely to suffer "breakage" if the infrastructure on top of which they run is only randomly reliable and/or stingy with resources. Also see Gates/Page's law: https://en.wikipedia.org/wiki/Wirth%27s_law


That article is missing the classic formulation: "What Intel Giveth, Microsoft Taketh Away"

http://exo-blog.blogspot.com/2007/09/what-intel-giveth-micro...


Maybe because Moore's law is reaching its theoretical limits?


Actually the law is better translated as: "More things should not be used than are necessary."


Except that you never know what is truly necessary or not since you are making assumptions.


Think of it like this:

In the search for a cure for headache, some guy took aspirin and did a magic rite, and he was cured; another guy just took aspirin and he was cured. Both accounts can be reproduced, and so far both have worked. Therefore, when doing analysis you can ignore the bit about magic; while it may be relevant in some mystical sense (the spirits are happier if you do it, whatever), it is unnecessary to explain the cure of the headache.


I took a class in college where the professor built the class around Occam's razor. Every quiz had a word limit associated with each question. It was honestly a quite difficult exercise to make your answer concise and remove unnecessary information.


Genuine question: why do physicists explore a theory that posits 11 dimensions if the evidence at hand does not require 11 dimensions, i.e. if it makes additional assumptions about the Universe beyond what has been collected? (This question refers to string theory.)


Well, first you have to come up with at least two theories that _both_ explain the phenomenon. Then you apply the principle to choose the one that's "simpler".

You can't do a 'premature' optimization and refuse to even attempt to piece together a theory that sounds complex, when maybe that is currently the only theory that explains the phenomenon.


I'm not a quantum physicist so take this with a grain of salt, but my favourite analogy is heliocentric vs geocentric orbits: http://i.imgur.com/AReqgfP.gif

The collection of data can have assumptions built in that you are unaware of. If you aren't thinking about the earth moving when measuring the orbits of the other planets the data shows that the orbits are pretty crazy.

Likewise when we measure at the quantum level things seem pretty crazy, things in two places at once, etc. Saying "Once something is small enough it can behave completely differently from what we observe at larger scales" is a pretty big assumption.

String theory trades some assumptions for others to rationalize some of the 'crazy behaviour' at small scales.


This is the major criticism of string theory.

https://www.google.com/search?q=occam%27s%20razor%20string%2...


If you extend Occam's razor to theories, you should select the one that has the fewest ad hoc hypotheses (or the smallest set of auxiliary hypotheses in the Lakatos sense). In other words, one could argue that theories get "worse" with more auxiliary hypotheses, so from a scientific-process point of view, if you diligently falsify and force the creation of more auxiliary hypotheses, you can weaken a theory enough for a (more elegant) alternative to take its place.


More specifically,

> two hypotheses for which the evidence supports both

If they are supported by the same evidence, then the truth that is being supported must be the same. The simplest hypothesis is therefore the smaller nutshell that captures that truth. The other one is bloated, and bloated information (what this is all about) punishes us with complexity and irrelevance.

Fundamentally, any theory is an abstraction of evidence. Ockham's razor is about the quality of said abstraction.


Still, how do you define "simplest"? This seems quite arbitrary.


It is easy to do in many cases. Take the following two theories:

Rocks fall downward because masses attract each other.

Rocks fall downward because invisible ghosts make masses attract each other.

Both of these theories have the same amount of evidence supporting them, but one is strictly simpler than the other. If one theory isn't a simple reduction of the other, then Occam's razor doesn't apply, but in those cases it is usually possible to devise experiments that disprove one of them.


The invisible ghosts are called "gravitons" in mainstream physics community :-)


Less is not arbitrary. Be it fewer assumptions, variables, bits, rules... And these are all abstractions. Abstractions can be counted, so all simple really means is fewer abstractions. Theories tend to refine themselves because ultimately we arrive at one word, and with new evidence, even words will adapt. What was gravity yesterday is graviton today.


In a sentence, "What is?" versus "What do I do?".

Occam's Razor is a useful -- but not fool-proof -- tool for the latter.

It says nothing definitive about the former. At best, it makes a very broad statistical generalization.

If it's actually statistical (measured), and not anecdotal.


Couldn't adding more assumptions also make it easier to falsify? Like if the assumptions obviously lead to inconsistencies, or if the added assumption is required for the hypothesis and happens to be easier to test in isolation, then one could throw out the hypothesis while only checking a single assumption.


Fewer assumptions = special cases and exceptions stand out more and are harder to integrate without altering the theory beyond recognition. More assumptions = more moving parts.

Because the idea is that the theory will be tweaked anyway. Nothing's gonna be perfect on the first try.


I think a more accurate interpretation is "the simplest explanation is the likeliest". It follows as a consequence of Solomonoff induction.


But it is not about which theory is most likely. If two theories or models generate exactly the same predictions, they are equally true. Occam just says that in this case we should choose the simpler one.

But if the theories produce different predictions, then it doesn't matter which one is simplest; the one which most closely matches reality is the truest.


In practice we are often in situations where multiple hypotheses explain the data we observe but diverge from each other on data that we have not yet observed, and it may be difficult to create the situations necessary to tell them apart. In these cases we may still wish to choose a hypothesis to make predictions from.

There are an infinity of possible models to choose from, with most of those models containing no less information than the phenomenon they seek to model. Predictive power is what is important for models; a model that has enough dials to be adjusted to work with any new piece of data might be 'correct' but it is not useful. My favourite example of this is the fact that when the heliocentric model of the solar system was being developed, the geocentric model was providing much more accurate values for the positions of celestial bodies for a long time because it had had years of being tweaked to do so. Initially, it was the simplicity of the heliocentric model rather than its accuracy that was appealing.


If we already know the answer, then we don't need a hypothesis. The only reason to choose is because there are different predictions of the unknown.


"simple" explanations are quite often the ones with the most assumptions so I don't think your interpretation is accurate.


"Simple" here has a particular meaning, related to Kolmogorov complexity.


Could you clarify what you mean by that?


Take high school physics, where you assume that there is no air resistance, cows are spherical, gravity is exactly the same everywhere, and so on. Assuming all that, you can make a much simpler model than you can without those assumptions.


Two adjustments that I would make.

Remove Metcalfe's law. It is a massive overestimate. See http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf for the better n log(n) rule for valuing a network.
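For a feel of how far apart the two rules get, a rough sketch (arbitrary units; only the growth rates matter):

    import math

    for n in (10, 1_000, 100_000, 10_000_000):
        metcalfe = n * n                 # n^2 valuation
        odlyzko = n * math.log(n)        # n log n valuation
        print(n, f"{metcalfe:.1e}", f"{odlyzko:.1e}", round(metcalfe / odlyzko))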

And I find Le Châtelier's principle generally applicable, and not just to Chemistry. It says that if you observe a system at equilibrium, and try to induce a change, forces will arise that push it back towards the original equilibrium. It is one thing to recognize this at work in a chemical reaction. It is quite another to be blindsided by it in an organization.

See http://bentilly.blogspot.com/2010/05/le-chateliers-principle... for my explanation of why this holds in general outside of chemistry.


One thing I question with Le Châtelier is: how do you know which equilibrium the system will be pushed back towards?

Humans want speed and want to avoid risk. Supposedly, if you introduce seatbelt laws, speed increases and risk stays the same. How do you predict that in advance? Why doesn't risk decrease and speed stay the same? Or why doesn't speed increase a bit and risk decrease a bit?


Push it back towards does not mean that it necessarily arrives at its original position. Just that it didn't wind up as far away from it as you'd naively hope.


It means there is some control the system has over its variables, and it will move to an optimal position based on that. Humans can adjust their driving speed, and they want to maintain a certain equilibrium of risk. When you add seatbelts, humans will try to move back to the optimal risk level they set. Normally they won't go all the way back, but they'll go some of the way.


Ugh, maybe I'm the only one, but I don't find this list useful. Not because it isn't interesting, but because of the implication that it will actually make you smarter. The problem today isn't information, it's knowledge. Even if you can correctly and fully understand all these models, something that could take years, you still most likely wouldn't be able to apply them, especially when they are in conflict with each other.

I think it's a much better idea to study things like critical thinking, practical reasoning and operational leadership. Back in the day hacker values stated that you could ask for directions, but not for the answer. Because the process itself was as important as the answer. Not just for amusement, but because there might not be a right answer and the next time you're confronted with a similar problem you now have some experience of making those decisions.

A great deal of "stupidity" in technology these days seem to stem from schools that promote check box answers to complex problems and the popularity of these "laws" that make people so sure of themselves that it prevents them from proper reasoning.


This list is useful for people who are already used to thinking in these ways, intuitively, by being exposed to other people and learning by osmosis. Formalizing intuition leads to easy growth.

For other people, you're right, it's about as useful as reading through a list of course descriptions rather than taking the actual courses.


"This list is useful for people who are already used to thinking in these ways, intuitively, by being exposed to other people and learning by osmosis. Formalizing intuition leads to easy growth."

That's exactly what I'm questioning though, if that's the right way to learn things. Especially with the premise of the article, there's a risk that people are just "collecting facts" to be used as anecdotes to avoid reasoning.

"For other people, you're right, it's about as useful as reading through a list of course descriptions rather than taking the actual courses."

Somewhat ironically, I read course curricula all the time to figure out which subjects are covered and which beginner books are good.


I think everything on this page is about encouraging reasoning rather than just collecting facts. If someone takes each one as a truism and blindly follows it, then yes, it would turn these mechanisms (which are supposed to help us identify and fight our own cognitive biases) into just another form of cognitive bias. That's why they're not laws, they're just handy patterns to help us reason about things that we see every day.

I'm not sure what you mean by using them as tools to avoid reasoning- they are explicitly meant to help with reasoning. I don't know what sort of reasoning could be done without incorporating any sort of logical frameworks at all. That's all this stuff is, tools to aid reasoning by identifying common patterns and antipatterns in thoughts and perceptions about the world around us. Anyone who treats these ideas as absolute laws rather than occasionally (frequently) useful abstractions is doing it wrong.


"That's all this stuff is, tools to aid reasoning by identifying common patterns and antipatterns in thoughts and perceptions about the world around us."

Identifying patterns is second to learning something. Logical fallacies are examples of bad arguments. You should first learn how to evaluate an argument [0] before trying to identify logical fallacies. Not only will you learn more, but there's a greater chance you will be able to put any given "model" in context. That people find things like logical fallacies useful is an indication that they don't understand the fundamentals.

[0] https://www.google.com/search?q=critical+thinking+argument+e...


This is super useful. I have a similar list, but it also includes techniques and ideas:

  * Dimensionality Reducing Transforms
  * Hysteresis, Feedback
  * Transform, Op, Transform
  * Orthogonalization for things that are actually dependent
  * Ratios, remove units, make things dimensionless
A big one, that helps me immensely: when I need to do a big/risky/complex task, I imagine myself doing it with sped-up time. This instantly creates an outline and a list of the tools one will need.


This is why my estimates are often really optimistic. For most tasks I can visualize the entire piece of work in just a few moments, but I forget to adjust for how long things actually take. I can picture having to do each piece of the task, but I forget that those tasks have to exist in real time, and that I will have to think and re-evaluate as I go.

I am getting better at it, but I have to be conscious of it, lest I estimate for superman by accident!


Can you write more about "sped up time"? I'm reminded of something like this: http://lesswrong.com/lw/mnp/travel_through_time_to_increase_...


That is more metaphysical than the basic technique. If I were going to exchange the hard drive in my laptop, I would visualize the entire process, noting the questions and problems as I completed each step:

  * do I have the proper tools? Lookup special fasteners
  * I will misplace the screws, a magnet or plastic cups would help
  * It might be dirty inside, I need something to clean 
  * I might drop a screw inside, tweezers
  * Could be dark, headlamp
  * cable might not stay in place, tape
It might take 20-30 seconds to run through all the steps in one's mind, anticipating problems before they arise.


You are a physicist, aren't you?


Was.


What's transform, op, transform? Transform to another basis, then do an operation which is easier to conceptualize, then transform back?


Upvote for hysteresis.


Perhaps I need to explain my comment above "Upvote for Hysteresis", given the downvotes. It was a quick comment that might have come across as flippant, so I will explain:

I was first introduced to the concept of hysteresis as an EE undergrad.

As I went on to grad school, which was heavily economics-based, I took a course in system dynamics at MIT [1]. In the intro class, the prof said: "system dynamics will change your mental model of the world" (and it did) [2]. As we went through the course, I realized that while many of the concepts in the course were econ-based, they were in reality similar to EE/mathematics concepts I already knew (a capacitor = a time delay, etc.). System dynamics showed me that concepts in one discipline could be applied to a completely different discipline with great effectiveness. In doing further economics work, I was immersed in many other mental models. My friends and I would use these economics-based mental models - many of which are in the OP article - to communicate in an efficient manner at school and when we were out on the town, almost like a shortcut way to speak and efficiently organize thoughts / explain a given situation.

By the time I re-entered the workforce, post-grad school - working in the venture world - I was regularly and subconsciously thinking/communicating in terms of these econ-models. But, a big aha happened when one day I heard one of the partners at my firm use the term "hysteresis", not to describe a hardware company we were looking at, but to describe a very specific management-related situation with one of the entrepreneurs we were speaking with. And I understood exactly what he meant by that term, as it applied to this management situation. Aha! It turned out that my EE world provided me with a whole toolbox of mental models - just like econ - that I can not only use to express myself, but also to be understood! (fair enough: this was the valley, where most people I dealt with were engineers). It was one of those moments when I realized what I had learnt many moons ago had direct applicability to what I was currently doing but in a completely different context.

Seeing "hysteresis" in the parent's post brought back memories to that realization and its backstory, thus my comment.

[1] this is a close enough comp for the Systems Dynamics course mentioned above - http://ocw.mit.edu/courses/sloan-school-of-management/15-871... [2] An example is "stocks and flows", a way to view things as static or dynamic. This was used effectively in a cybersecurity market map / competitive analysis many years later.


Are you visualizing all this in your mind's eye?


what do you mean by Transform, Op, Transform?


Look at performing operations in the frequency domain or diagonalizing a square matrix.
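A minimal sketch of the pattern with numpy, assuming a symmetric matrix so the eigendecomposition is clean:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # transform: move to the eigenbasis, where A is diagonal
    w, V = np.linalg.eigh(A)                  # A = V @ diag(w) @ V.T
    # op: the hard operation (a 10th matrix power) is trivial there
    # transform back: return to the original basis
    A_pow10 = V @ np.diag(w ** 10) @ V.T

    print(np.allclose(A_pow10, np.linalg.matrix_power(A, 10)))   # True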


Good list! A few suggested tweaks:

Veblen goods clearly exist, but the evidence for the existence of Giffen goods is much more suspect. (Did the poor really eat more bread because the price of bread rose, or because there was an across-the-board increase in the price of all kinds of food?)

The Precautionary Principle is not just dangerous or harmful, but guaranteed suicide; as things stand right now, we are all under a death sentence. It needs to be replaced by the Proactionary Principle, which recognizes that we need to keep making progress and putting on the brakes is something that needs to be justified by evidence.

Any list that has sections for both business and programming needs some entry for the very common fallacy that you can get more done by working more hours; in reality, you get less done in a sixty-hour week than a forty-hour one. (Maybe more in the first such week, but the balance goes negative after that.)

The distinction between fixed and growth mindset is well and good as far as it goes, but when we encourage the latter, we need to beware of the fallacious version that assumes we can conjure a market into existence by our own efforts. You can't become a movie star or an astronaut no matter how hard you try, not because you lack innate talent, but because the market for those jobs is much smaller than the number of people who want to do them.


A technique I often use to test a theory is to change the inputs to be the maximum and minimum possible values and see if the model still holds true. I've found it to be incredibly useful in a few specific situations.


Or more generally, look for critical points in the model and see if it still holds. Max/min values (or odd combinations of max/min for different variables) are good candidates, as are zeroes, and anything which makes part of an equation go to zero.
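A toy example of that habit: push a model to its edge cases and see whether it behaves sensibly (the projectile-range formula here is just a stand-in):

    import math

    def projectile_range(v, theta_deg, g=9.81):
        # idealized range of a projectile launched at speed v and angle theta
        return v ** 2 * math.sin(2 * math.radians(theta_deg)) / g

    for theta in (0, 45, 90):
        print(theta, round(projectile_range(20, theta), 3))
    # 0 and 90 degrees collapse to (roughly) zero, 45 gives the maximum:
    # the model survives its extreme inputs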


I've always thought this should be a very effective way to explain a point to someone, but in practice it rarely seems to work... maybe that saying applies: you can't use logic to change the mind of someone who didn't use logic to arrive at their conclusion.


That might be because you're trying to use it to argue politics, where it's less applicable; hard cases often make bad law, and you can easily end up with a straw man. It works better in science and engineering.


Also a basic programmer skill. Check a normal value, the limits, and any values that may lead to unexpected results, like division by zero.


Right. The Laffer curve (https://en.m.wikipedia.org/wiki/Laffer_curve) applies this idea.


Yeah, that's somewhat related to sensitivity analysis that's already in the list.

Though I think I'd agree that it's technically a different model, but related.


I think pg also wrote an essay about a mental model that I find interesting: When in doubt, it's probably not about you.

There are many events that we usually think are related to us, but actually aren't, like your boss or customer being angry is in most cases not about you but something else.

I have looked through a lot of pg's essays but didn't find it. He probably didn't remove it; I just can't find it (/example).

If someone else finds it, please link.


When shop personnel are not friendly, we tend to think that the person is not a friendly person, as a character trait. When I'm not friendly in the same situation, it's because I didn't get enough sleep, or because I had a fight with my girlfriend that morning. So in my case it's circumstantial, temporary, not my fault; in the other person's case it's the person's fault and permanent.

What you describe is our inner voice doing the same thing. (This is my personal explanation!)

Google for "inner voice doubt" and find out more!


Yes. For another search term, the fallacy in question is sometimes called the Fundamental Attribution Error.


I'm surprised he rates cost-benefit analyses as a 2 ("occasionally" used) rather than a 1 ("frequently" used). Making good decisions almost always requires taking a hard look at both the costs and the benefits. It cannot be overstated how often bad decisions are made because the parties involved simply neglected to factor in the costs (including opportunity costs).

I personally use cost-benefit analyses for every non-trivial decision in my life.


Yeah, the "X has a benefit, so we should do X" fallacy is really common, both in software development, politics, and most everywhere.


But did you consider the costs of rating it as a 1 rather than a 2?


One problem with cost-benefit analyses is the unknown unknowns. Just because you have a cost-benefit model, it doesn't mean the model reflects the reality of your situation. There is a very real risk that much time is spent considering eventualities that can never occur, while ignoring all of the things that are actually happening.


Some commenters here are saying, "I already know this stuff." Indeed. I'd be curious if people could put out a list of "advanced" mental models. For example, Bayes' theorem is more advanced than Occam's razor.

What's clearly more advanced than Bayes' theorem, and as useful? ET Jaynes' flavor of probability theory? I'd posit the more advanced version of active listening as, "being able to perform a bunch of kinds of therapy--freudian, rogerian, family and systems etc." Of course I don't mean you go get a license for these things. I'm positing them as difficult, generally-applicable life skills. I'm not claiming these are good examples; I think HN can come up with better ones.


Thinking being a flux of information, and EE telecom theory having discovered all kinds of laws about the flow of information, it's no surprise that those models apply pretty well to the engineering tradeoffs of general mental models of thinking, or to thinking about information in other contexts.

An EE control theory class IS an entire senior-year class on applying a model to something (a thermostat?), which isn't terribly hard, and then modeling and measuring its performance and finally optimizing the model, which is pretty hard.

Shannon's law explains how good ideas, noise/distraction/bad ideas, depth of concentration or maybe total volume of information, and rate of mistakes all interrelate, and how changing one (or several) will affect the others in general.
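For reference, the law itself is the Shannon-Hartley capacity C = B * log2(1 + S/N); a one-liner with rough phone-line numbers:

    import math

    def capacity(bandwidth_hz, snr_linear):
        return bandwidth_hz * math.log2(1 + snr_linear)

    # ~3 kHz of bandwidth at ~30 dB SNR, i.e. a rough voice phone line
    print(round(capacity(3_000, 1_000)))   # about 29,900 bits per second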

There are some interesting tradeoffs in communication filter design (analog hardware or modeled in DSP) along the lines of you can freely trade smoothness in response (group delay, ripple, latency, monotonicity kinda), accuracy in response, and complexity/cost. These tradeoffs apply to everything in the world that processes things not just filter synthesis.

There is some kind of chaos theory "thing" where as feedback mechanisms become more complicated, oscillation becomes inevitable and unpredictable. Doesn't matter if we're talking about high gain amplifier design or world economic models.

This is aside from the general engineering mental model that a good engineer can freely exchange cost, reliability/safety, and performance. In fact, because it is enormously easier to trade among those than to expand all of them, you can pretty much see through transparent marketing that only mentions one or some of the factors. This applies to all of reality, not mere structural engineering.

I think the optics people could say a lot about their seemingly endless stable of aberrations. There are so many effects and interactions its surprising anything optical works at all, much less works well. Optics is almost a meta law that everything interacts with everything and constants aren't.


I'm not sure about expanding on Bayes' theorem, but some other notions from ML/stats that would be good to know are overfitting/the bias-variance tradeoff and base rates.

One instance where I've seen the former applied to society is the idea of research benchmarks getting stale from "overfitting". Even when researchers do cross-validation, we might still expect our exploration of the space of ML models to be skewed towards models that perform unusually well on well-known benchmarks. This was described in http://www.deeplearningbook.org/ with reference to ImageNet (of course).

As for the latter, pretty much every time I've seen a discussion of statistics on social or old media, 90% of the participants seem unaware that base rates matter.
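The canonical base-rate example, as a few lines of arithmetic (the usual textbook numbers, not anything from the thread):

    prior = 0.001            # 1-in-1000 base rate for the condition
    sensitivity = 0.99       # P(test positive | condition)
    false_positive = 0.01    # P(test positive | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(round(posterior, 3))   # ~0.09: a positive result is still ~91% likely to be false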


I think a lot of these (http://mcntyr.com/52-concepts-cognitive-toolkit/) are really useful and more advanced.


I'm just getting exposed to this line of thinking and find it fascinating. Another resource I found recently was https://www.farnamstreetblog.com/mental-models/

Disclaimer: I'm not sure if it's derivative blogspam or legitimately insightful / original


A nice metacognitive cheat sheet.

Missing a couple interrelated mental models I find very important:

- emergence: a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties

- decentralized system: a system in which lower level components operate on local information to accomplish global goals

- spontaneous order: the spontaneous emergence of order out of seeming chaos. The evolution of life on Earth, language, crystal structure, the Internet and a free market economy have all been proposed as examples of systems which evolved through spontaneous order.


Could someone give me a real example of somebody using mental models in a real world application? I just find the idea of learning and studying mental models to be distracting and confusing. Pardon my ignorance.


To be honest, I think that the process is something like:

I struggle with problems and eventually find a solution,

I encounter a name for a similar solution,

as I encounter new, similar problems, I begin to recognize "how the model works",

I forget about the model when I don't use it,

occasionally I come across a list like this one, which is fun because it validates the usefulness of the models I've already found and introduces me to new names for ones I have already encountered.

I don't feel that a list like this is super useful to me outside of that framework-- I wouldn't take it as a "study guide".

But I feel that framework has given me a lot of personal validation and pointers on how to better deal with problems I encounter.


I like the scenarios described in https://carcinisation.com/2015/07/23/defensive-epistemology/.

One of them is using the efficient market hypothesis (true enough for this application) to avoid being taken in by a real estate broker.


I mention it in a separate comment, but the book Inside The Box is co-authored by a psychology professor and a business consultant, and the book features many case studies of their primary mental models in business.


As I mentioned elsewhere, the theme here is correcting the common, default mental models that, when not thought through, lead many people to make a lot of mistakes. By recognizing the bugs in your default thought process, you can avoid many bad decisions that you would otherwise eventually regret.

Hanlon's Razor: A partner company just did something that makes life much more difficult for you and much easier for themselves. By all accounts it looks like they are only pretending to be "partners", but actually secretly trying to screw you over for their own gain. It's easy to get emotional and paranoid in this situation. If it's really true, you need to find a way to cut off the partnership quickly. That is serious business. But, it's more often better to default to the possibility that maybe they aren't actually fucking with you. Maybe they're just idiots. Maybe they got lazy. Maybe they didn't think through the consequences. Maybe you don't need to go into paranoid adversary mode, blindside your partners with suspicious reactions "out of nowhere" and fuck up a good partnership that in reality just needed better communication. Or, maybe they actually are out to get you. Just don't completely forget the more likely possibility that this is simply a mistake. It's very common that people do forget...

Zero Sum: It's easy to default to a "If they are getting richer, someone else is getting poorer" mindset. A significant number of businessy people have a "In order for me to win, YOU MUST LOSE" mindset. From both directions, this cuts off the greatly preferable win-win outcome. Recognizing this flaw in default thinking can lead you to an even better outcome for yourself than the "you defeating someone" outcome. You can instead find a way for both of you to come out ahead of the "individual victor" outcome.

Streisand Effect: You just fucked up majorly in a way that isn't obviously your fault. There are two different ways that you can try to improve your situation that will likely backfire very badly. 1) You can pretend nothing happened and hope it goes away. That is very easy. But, when the truth becomes clear, you won't just be a fuck-up, you'll be a lying bastard betrayer fuck-up, unworthy of trust or respect. 2) Even worse: You could try to shift the blame to someone else. Doing this will mostly serve to bring focus on the problem that you yourself caused. So now, even more people become strikingly aware that you are a lying bastard betrayer fuck-up who back-stabs innocent people for your own benefit. In the end, if you had simply admitted the problem and discussed how you were trying to solve it, most people would have been OK with your fuck up. But, by trying to hide it, you only made it much worse.

Framing: Sometimes mechanically analyzing a complicated situation is difficult for a human. It's easier to fall back on prior, similar references. Unfortunately, that tendency can be hijacked and abused in situations where you don't actually have much in the way of prior references. By presenting brief, false, set-up situations, an adversary can plant invented prior references into your decision process. If you are not aware enough to dismiss those plants, you will likely make a very poor value judgement. The adversary might not be a person, but instead simply a situation.

And so on...


I still think Social Psychology was one of the most useful classes I ever took in college. Sure, some of it is probably dated by now, but the cognitive bias theories really helped me further in life.

I remember telling some classmates to take the class, and they assumed it was for an easy A and not for how useful the class would be (I went to GaTech a long time ago, and well, the social sciences were just not respected like the engineering disciplines at the time).


As a fellow Georgia Tech grad, I wholeheartedly concur. The course on Political Philosophy was not only one of the most challenging classes, but also the most useful in daily life. It really changed how I think about the world, so much so that I decided to double major in international affairs/modern languages. It's a pity that the social sciences/liberal arts aren't as respected at engineering schools. Those courses were equally enriching (if not more so) to my life as combinatorics, differential equations, and quantum mechanics.


To the development section, I would add the concept of computational context/state, caching, and queue/event loop.

This HN comment summarizes it pretty nicely "everything in an OS is either a cache or a queue" https://news.ycombinator.com/item?id=11655472

Also Overton window


I have a similar list of useful concepts. My goal so far this year was to expose myself to those concepts as often as possible. I made an app for my phone that displays the concept of the day on my home screen (right now it's the rhetorical concept of periodic sentences). I also made images for each of the concepts that I use as my chromecast backdrop. I've seen each of them dozens of times by now, mostly unconsciously.

So far, mixed results. I would like to say that I think of "Bayes Theorem" at the perfect time because I wrote it on a list, but that never happens. I guess I've benefitted from thinking about these concepts more, but that's almost impossible to measure. A list of 100 useful mental models has limited value if you can't hold all of them in memory at once and retrieve them at the right time. I'm still trying to come up with a solution for this. Unfortunately I think this might be a fundamental limitation of human learning.


Instead, try and think of a situation in the previous day when you might have applied it, and imagine applying it to a problem in the coming day.


> What am I missing?

In planning a strategy, I've found it helpful to consider Win Conditions. It forces me to think backwards from the goal, construct a dependency tree, and consider resource allocation. I first heard about it from videogames but I've also seen it in math, engineering, logistics, recipes, etc. I also pattern-match it to the insight that solved the Problem of Points [0], which motivated probability theory. If it were on the curated list, I'd expect to find it under "models" next to cost-benefit analysis.

[0] https://en.wikipedia.org/wiki/Problem_of_points#Pascal_and_F...


Great list, although I prefer the term "thought technology" (as coined by John Roderick) to "mental model".


So, a question back at you. Let's suppose an ontology of technological mechanisms. That is, describing technologies by how they operate. I've kicked some ideas around and come up with:

1. Process-knowledge. Arts and practical stuff, say, agriculture, construction, boatbuilding, sailing, etc.

2. Fuels & combustion, generally. Wood, plant and animal oils, charcoal, coal, petroleum; steam, Otto, diesel, turbine engines.

3. Materials. Functions dependent on specific properties, and abundance of materials they're based on.

4. Power and transmission.

5. Sensing, perception, symbolic representation & manipulation.

6. Systematic knowledge. Science, geography, history.

7. Governance, management, business, & institutions.

8. Scaling and network technologies. Cities, transport, communications, computers.

9. Sinks & unintended consequences. Pollution, effluvia, systems disruption, and their management.

"Thought technology" probably falls into scientific knowledge (models) or symbolic processing.

Thoughts?

More: https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww


On-going series on mental models at http://www.safalniveshak.com/category/mental-models/


His definition of a "strawman" is incomplete. It's not simply misrepresenting someone's argument; it's misrepresenting it specifically by analogizing it falsely to something similar that is easier to attack. The example he links to is a rather exaggerated strawman. I think most people would favor the strawman explanation in Wikipedia [1].

[1] https://en.wikipedia.org/wiki/Straw_man


Many of his definitions are incomplete.


The wrong assumption about Occam's Razor is probably the cause of so many people re-inventing the wheel.

"I don't need this big framework, I can do with much less!"


How would one actually use this stuff?


This is one of those lists that would be completely unhelpful if you don't already know how to use most of what's on the list.


Agreed. It's as helpful as the other post on list of free online programming books. Needs more handholding to help people make the best use of the list.


This is the first link I found for this speech (it's called "Practical Thoughts about Practical Thought"), but it's an application of combining mental models from many disciplines, from the man himself:

http://mungerisms.blogspot.com/2010/04/charlie-munger-turnin...


Once you fully understand them, you may find yourself incorporating them into your thought process without explicitly calling upon them.


Yep. Most of these models are thought-out explanations of how not-thought-out reasoning frequently goes very wrong. If you understand that this is a list of bugs in your initial gut reactions, you can recognize when and why you are about to make a mistake.


This book is a very handy pocket reference that overlaps with many of the ideas mentioned here:

https://www.amazon.com/Decision-Book-Models-Strategic-Thinki...


This is the core of the book Peak: Secrets from the New Science of Expertise. It is a book of how to create a mental representation of what successful mental representations look like.

The most successful people, the peak performers, are those who have the best mental representations.


I would add to the list 'revealed preference'

'... an economic theory of consumption behavior which asserts that the best way to measure consumer preferences is to observe their purchasing behavior. Revealed preference theory works on the assumption that consumers have considered a set of alternatives before making a purchasing decision. Thus, given that a consumer chooses one option out of the set, this option must be the preferred option' http://www.investopedia.com/terms/r/revealed-preference.asp

In other words "observe their actions, not their words"


Are these "mental models" or just a bunch of clichés / pithy aphorisms? To me, a mental model would be something more like "visualizing possible state transitions as a directed graph" or something like that.


Nice. They got Hick's law... that's one of my favorites, not so much in development, but in sports. I train Brazilian jiu-jitsu, and I find substantial improvement in my reaction time by having only 2-3 well-worn options at my disposal (even 3 starts to feel crowded) in any given position, rather than a multitude of counters/attacks. When someone is trying to strangle you, going left or right is often a better choice than let's-check-the-mental-database-for-the-ultimate-move.
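If anyone wants the shape of it, the Hick-Hyman law is roughly RT = a + b * log2(n + 1). A tiny sketch with made-up constants, just to show why 2-3 options beat twenty:

    import math

    def hick_reaction_time(n_options, a=0.2, b=0.15):
        """Hick-Hyman law: RT = a + b * log2(n + 1).
        a and b are illustrative constants, not measured values."""
        return a + b * math.log2(n_options + 1)

    for n in (2, 3, 8, 20):
        print(n, round(hick_reaction_time(n), 3))
    # Decision time grows only logarithmically, but it still grows:
    # trimming 20 options down to 2 or 3 cuts the added cost by more than half.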


The mental model from economics that is widely misinterpreted is comparative advantage. Most think it means you/a country/etc. should specialize in what you are best at, and then free trade will work to your advantage. But it actually means that even if you are worse at producing products A and B than another country, if your ratio of A/B is better than the other country's, it is good for you to produce A and trade it to the other country for B.
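A minimal sketch using Ricardo's classic illustrative numbers (England is worse at both goods in absolute terms, yet trade still pays):

    # Ricardo's classic example (illustrative numbers): labour hours per unit.
    # Portugal is better at *both* goods, yet trade still pays.
    hours = {
        "Portugal": {"wine": 80, "cloth": 90},
        "England":  {"wine": 120, "cloth": 100},
    }

    def opportunity_cost(country, good, other_good):
        """How many units of other_good must be given up to make one unit of good."""
        return hours[country][good] / hours[country][other_good]

    for country in hours:
        print(country, "1 wine costs",
              round(opportunity_cost(country, "wine", "cloth"), 2), "cloth")
    # Portugal 1 wine costs 0.89 cloth
    # England  1 wine costs 1.2 cloth
    # Portugal's wine is relatively cheaper, so Portugal specialises in wine and
    # England in cloth, even though England is worse at both in absolute terms.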


A couple more:

Evolution

> Frequency-dependent selection: fitness of a phenotype depends on its frequency relative to other phenotypes

> Evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. Related to the Nash equilibrium and the Prisoner's Dilemma (a small hawk-dove sketch follows below).

Economics

> Debasement (gold coins): lowering the intrinsic value by diluting it with an inferior metal.
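On the ESS item above, a minimal sketch of the classic hawk-dove game, with illustrative V and C (C > V, so the ESS is the mixed strategy "play Hawk with probability V/C"):

    # Classic hawk-dove game (illustrative numbers): V = value of the resource,
    # C = cost of an escalated fight. With C > V, neither pure strategy is an ESS;
    # the ESS is to play Hawk with probability V / C.
    V, C = 2.0, 6.0

    def fitness(strategy_hawk_prob, population_hawk_prob):
        """Expected payoff of a strategy against a population mix."""
        p, q = strategy_hawk_prob, population_hawk_prob
        hawk_payoff = q * (V - C) / 2 + (1 - q) * V      # payoff when playing Hawk
        dove_payoff = q * 0 + (1 - q) * V / 2            # payoff when playing Dove
        return p * hawk_payoff + (1 - p) * dove_payoff

    ess = V / C  # 1/3 here
    for invader in (0.0, 1.0):  # pure Dove and pure Hawk mutants
        print(invader, fitness(invader, ess) <= fitness(ess, ess) + 1e-9)
    # Both print True: against the ESS mix, no pure strategy does strictly better.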


Quite a few of these "mental models" are just a definition of terminology like "botnet". Come to think of it, the complete list is just definitions.


I would say "Divide and Conquer" should be a 0... it is that useful and it can be applied to many many different categories.

So many things seem intractable and formidable in complexity yet once these things are broken down into pieces things become clear. The Asana CEO once talked about this. Breaking things out provides clarity and once you have clarity productivity is massively increased.
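A minimal code sketch of the idea, with merge sort as the textbook instance (split the problem, solve the halves, combine):

    def merge_sort(xs):
        """Divide and conquer: split the problem in half, solve each half,
        then combine the two easy sub-results."""
        if len(xs) <= 1:                      # a trivially small piece is already solved
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0               # combine step
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]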


If you enjoy these sort of summaries, I encourage you to check out the book "Seeking Wisdom" by Peter Bevelin https://www.amazon.com/Seeking-Wisdom-Darwin-Munger-3rd/dp/1...


There is also an app on the Apple App Store that has most of these mental models in book form: https://itunes.apple.com/us/book/think-mental-models/id61236...


I thoroughly enjoyed the book Inside The Box, which presents four mental models for creative problem solving. The core idea, that creating rules can help creativity, is a pattern most technical people (myself included) tend to feel averse to, but one that can actually be beneficial when studied with an open mind.


I recurrently use: Everything is a... [1]

Even when this model doesn't explain 100% of occurrences, it's great as a starting point for understanding the main pattern of a complex system.

[1] - http://c2.com/cgi/wiki?EverythingIsa


Perfect, but how do you use these models?

Are you supposed to know all hundreds of them by heart and then, in the middle of a conversation, go: "Ah, but principle X says Y, therefore we will go with option Z"? Is that it? Am I missing something?

I mean, I'd love to use this but I don't have enough brain cells for all of those :)


I'd add Amdahl's Law [1], which is about the relationship between adding resources for executing a task, and the speed-up that delivers.

[1] https://en.wikipedia.org/wiki/Amdahl%27s_law
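A quick sketch of the formula (the 95% parallel fraction below is just illustrative):

    def amdahl_speedup(parallel_fraction, n_workers):
        """Amdahl's law: overall speed-up when only part of a task benefits
        from the extra resources."""
        p = parallel_fraction
        return 1 / ((1 - p) + p / n_workers)

    # With 95% of the work parallelisable, even unlimited workers cap out at ~20x.
    for n in (2, 8, 64, 1_000_000):
        print(n, round(amdahl_speedup(0.95, n), 2))
    # 2 -> ~1.9, 8 -> ~5.93, 64 -> ~15.42, 1,000,000 -> ~20.0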


It's an interesting list. Though I'm a bit baffled at why he has Power-law as a "1" (comes up frequently) and Heavy-tailed distribution as a "3" (rarely comes up). A power law is a heavy-tailed distribution!
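A tiny sketch of why the ranking is odd: a power-law tail (here with an illustrative exponent of 2) dwarfs an exponential tail far out, which is exactly what "heavy-tailed" means:

    import math

    # Tail (survival) functions, normalised so both distributions start at x = 1.
    def pareto_tail(x, alpha=2.0):      # power law: P(X > x) = x ** -alpha
        return x ** -alpha

    def exponential_tail(x, rate=1.0):  # light tail: P(X > x) = exp(-rate * (x - 1))
        return math.exp(-rate * (x - 1))

    for x in (2, 10, 50):
        print(x, pareto_tail(x), exponential_tail(x))
    # At x = 50 the power-law tail is ~4e-4 while the exponential tail is ~5e-22:
    # every power law is heavy-tailed, even though not every heavy tail is a power law.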


Very underwhelming; I'm actually quite surprised that most people seem to find this useful and interesting. I mean, normal distribution, Moore's law, minimum viable product, paradox of choice... that's pretty basic stuff.


Under Competing, I'd add OODA loops ~ https://en.wikipedia.org/wiki/OODA_loop


Specifically the competitive aspect of getting inside an adversary's OODA loop


Along with the reference to Arrow's Impossibility Theorem, I'd want a reference to the fact that voting can be done in ways other than ranking, e.g. approval or score voting.
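For anyone unfamiliar, a toy sketch of approval voting (hypothetical ballots; no ranking anywhere):

    from collections import Counter

    # A toy approval-voting tally: each voter approves any subset of candidates,
    # and the candidate with the most approvals wins (no ranking involved).
    ballots = [
        {"A", "B"},
        {"B"},
        {"B", "C"},
        {"A", "C"},
        {"C"},
    ]

    tally = Counter(candidate for ballot in ballots for candidate in ballot)
    for candidate, approvals in sorted(tally.items()):
        print(candidate, approvals)
    # A 2
    # B 3
    # C 3  -> B and C tie; a score (range) ballot would let voters express more.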

Overall, a superb list.


Nice list! I really miss this one:

https://en.wikipedia.org/wiki/Reductio_ad_absurdum



Have a look at the Single Responsibility and High Cohesion principles, which I think should be included under development/design.


Is Gabriel Weinberg related to Gerald Weinberg? No, right? I've been wondering this for some time now.


An article that goes straight to the point. I like it!


Inflation is a mental model? Peak oil? Botnet?


I think these can be defended as mental models, even though the article doesn't do a good job of it.

We're familiar with inflation in the financial sense. But then there is also grade inflation. There is inflation of superlatives in our language, e.g. "great" and "awesome". Once we see a few examples we realise that inflation is a more general concept, and a useful one to use in explaining a lot of situations.

Same for peak oil, I think. Not sure about botnet.


I went back to look at the article again and now see that he says almost exactly what you just said. Was that there before?


Hmm.. the entry on inflation hasn't changed (doesn't say anything like what I said) but I see notes that mention grade inflation. I don't know whether that was there first time round.


Nothing groundbreaking here - I imagine most readers here already use most of the author's models - but this is a nice, comprehensive list, which I have not seen before.


Curation is a valuable activity. Like you, I'm happy to see this list.


Does anyone have a pointer to a list of these lists? Would be interesting to know what models people use on a day-to-day basis, like usesthis.com for the brain.


I've started keeping such a list actually: [redacted]


Ah, that's the first actual pinboard page I've seen -- I've been looking at/for archival tools for some time, currently Pocket.


I'd recommend it. I was grandfathered into the free plan but $11/year seems fair.

Regardless of what you settle on I'd look for the equivalent of http://www.packal.org/workflow/alfred-pinboard for whatever service and platform you use. Being able to instantly search through all fields of all items in your archive is pretty great and has changed the way I work.


Pocket has comprehensive search, which is pretty slick. The tagging feature leaves much to be desired, though it's also far better than Readability.


Thank you


I guess what you described goes by a well-known term called 'critical thinking'?


Strictly, no, though what's offered in part complements and in part substitutes for critical thinking. Some of these are components of critical thinking (or describe it); much of it isn't.

This is a set of both guidelines and heuristics, a set of patterns, if you will, which can be applied to situations or analyses. Some give you a fast route to a simple answer (Occam's Razor), some give pause before accepting what appear to be well-founded results (Simpson's Paradox -- I've encountered that before but had largely forgotten it). Some are simply shortcuts in estimation (order-of-magnitude, and log-based math -- multiplication and division become addition and subtraction).
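A quick sketch of that log trick, with made-up numbers:

    import math

    # The slide-rule trick: to estimate 3,200 * 45,000, add the (base-10) logs.
    a, b = 3_200, 45_000
    log_sum = math.log10(a) + math.log10(b)          # ~3.51 + ~4.65 = ~8.16
    estimate = 10 ** log_sum
    print(round(log_sum, 2), f"{estimate:.3g}", a * b)
    # 8.16 1.44e+08 144000000 -- good enough for an order-of-magnitude check.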

Critical thinking has varying definitions, but I'd generally describe it as more structured and procedural than what's offered by @wegge. See: https://en.m.wikipedia.org/wiki/Critical_thinking


wow, this is really useful. thank you.


aka buzzwords to make you sound more intelligent.


Interestingly, that's about 75% of my 2-year MBA.

Sure, it's quite different to do daily training to get those concepts ingrained in your mind so you don't have to actively think about them, but it's nice to see them listed like this.

Here are a couple more:

- Overconfidence bias: we usually think we're better than average at something we know how to do (driving) and worse than average at something we don't (juggling), even though almost nobody knows how to juggle and everyone knows how to drive

- No alpha (aka can't beat the market): you can only consistently beat the market if you're far better at financial analysis than a lot of people who do it every day all day. So don't bother trying.

- Value chain vs. profits: you'll find that most of the excess profits in the value chain of a product will be concentrated in the link that has the least competition

- Non-linearity of utility functions: the utility of item n of something is smaller than that of item n-1. Also, the disutility of losing $1 is smaller than 1/1000 of the disutility of losing $1,000. This explains insurance and lotteries: with a linear utility function both have a negative expected payout, but they make sense when the utility function isn't linear

- Bullwhip effect in supply chain: a small variation in one link of the supply chain can cause massive impacts further up or down as those responsible for each link overreact to the variation (also explains a lot of traffic jams)

- Little's law (in supply chain and a lot of other fields): average number of units in a system = arrival rate * average time in the system (sketch below)

I'll add more as I think about them.
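On Little's law, a minimal sketch with a hypothetical support desk, just to show how it gets used in both directions:

    # Little's law: average number in the system L = arrival rate * average time in system.
    # A hypothetical support desk: 120 tickets arrive per day and each ticket spends
    # an average of 1.5 days open, so on average 180 tickets are open at any moment.
    def littles_law_L(arrival_rate, avg_time_in_system):
        return arrival_rate * avg_time_in_system

    print(littles_law_L(120, 1.5))   # 180.0 open tickets on average

    # Rearranged, it also answers "how long will things sit in the system?":
    # W = L / arrival rate.
    print(500 / 120)                 # a backlog of 500 tickets implies ~4.2 days each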


- No alpha (aka can't beat the market): you can only consistently beat the market if you're far better at financial analysis than a lot of people who do it every day all day. So don't bother trying.

I'd argue that you can have alpha if you are better informed than everybody else. Financial analysis is the craft that comes after that. So yes, if all you have is financial analysis don't bother trying to beat the market. But if you have some unique insight, some information that the market doesn't have or doesn't see, then with some added financial analysis on top you do have an advantage that you can use to generate alpha.


You only have to be better informed on a specific subject, too. If, for example, your job involves buying large quantities of two brands of bananas and you notice that one brand is consistently worse than the other, you now have some unique insight that you could use to speculate on the banana market.


Sure, but insider trading is illegal. So let's say you can only rely on publicly available info, which everyone else has.

Then "unique insight" is financial analysis, plus macroeconomic analysis, etc.

In other words, if everyone has access to the same info, you can only consistently do better than the market by consistently having better analysis than the market. Everyone is seeing the same info, so you don't do better by seeing some piece of info others are not seeing, but by using different weights in your analysis than the market is using.

And even in those cases, the market might stay irrational longer than you can stay solvent.


You're working with an oversimplified model of reality.

Say you've worked in a specific industry for a long time and you know all the players. You know where the technology is, what the challenges are and where the tech is going. You know how key companies are managed, you have an idea about their goals and strategies. You know who's best positioned for what's coming. This is just general knowledge that you've acquired through your job over the years. Now say you've made enough money and retire. Because you know a thing or two about your industry you decide to buy or sell some stock. Can this be called insider trading? Perhaps. Is it illegal? Most likely not. Can you derive alpha from it? Hell yea.


And how many others have done exactly the same and are writing stock picks or selling industry consulting?

Again, unless you are confident that you're a better expert than everyone else, you shouldn't think you will beat the market with just public info.

Now if you're saying that you keep getting updated insider information from the entire industry WHILE you're trading, well, then you're back to the "insider info" part.


Why aren't you super rich, then? If that's really true, then go do that thing you said.


Straw man


I found it interesting that he didn't outline any models for resolving internal, emotional issues (e.g. relating to spouses, dissolving internal fears/debates, finding the root of personal issues).

I've summarized some of these strategies on my GitHub account; I call it an "emotional framework".

https://github.com/aantix/emotional_framework


That's pretty cool. Active listening and a lot of the negotiation techniques end up helping as well.


My only nitpick is on No Alpha: you can't beat the market on securities that everyone is tracking (mid to large caps). Companies with low institutional ownership (usually small or micro cap) are not well researched, so there is still money to be made, provided of course that you do the rigorous work to analyze them. It's the basis for how Warren Buffett got his initial capital before he turned to GEICO and created the megacorp Berkshire Hathaway.


The value/supply chain aphorisms are pretty interesting. Do you know of any books you could recommend to learn more about what can happen in supply chains/logistics and how to accommodate them?


Porter's Competitive Advantage is generally considered the bible in this area (value chain and competition).

On supply chains, look into operations research; a lot of it came from military and industrial research, but it can be applied everywhere, say, dimensioning servers for concurrent users.

Also, keep in mind that supply and value chains are very different. Let's take online mobile ads:

- Supply chain: advertiser -> agency -> ad exchange -> website -> viewer

- Value chain: website -> ad platform -> hosting service -> internet connection -> device -> OS -> browser -> viewer -> advertiser

If someone controls the entirety of any of the links, they will hold excessive market power and will be able to extract excess profits. If, for example, iPhones were the only smartphones in existence, Apple would be able to control what flows through them (unless the government took action in some way), extracting disproportionate profits.

Now if there are 1000 smartphone manufacturers, competition between them would lower prices, pretty much killing excess profits from that link, which could be captured by other links in the chain.


I can't answer the supply chain stuff specifically, but if you pick up a book on queueing theory and skim the results and theorems for simple queues and queueing networks, you'll reach these conclusions yourself.

I'd recommend one but it's been ages since I was an undergrad and I no longer remember specifics.
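The standard textbook M/M/1 results already show the kind of conclusion the parent means; a small sketch with illustrative arrival and service rates:

    # Textbook M/M/1 queue results (Poisson arrivals at rate lam, exponential
    # service at rate mu): a small sketch of why modest load increases hurt so much.
    def mm1(lam, mu):
        rho = lam / mu                 # utilisation
        L = rho / (1 - rho)            # average number in the system
        W = 1 / (mu - lam)             # average time in the system (Little's law: L = lam * W)
        return rho, L, W

    for lam in (5, 8, 9, 9.9):         # arrivals per hour against a server doing 10/hour
        rho, L, W = mm1(lam, mu=10)
        print(f"utilisation {rho:.0%}: {L:.1f} in system, {W * 60:.0f} min each")
    # 50% -> 1.0 in system, 12 min each; 99% -> 99 in system, 600 min each:
    # time in the system blows up non-linearly as you approach full utilisation.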


If the 29 minute read time is intimidating, consider this link: https://www.farnamstreetblog.com/mental-models/

All the information, easier to read quickly.


It was a nine minute read for me. Medium's estimates are always off and especially so when dealing with list-type posts.

Instead of relying on how long some website says something will take to read, it's usually a better idea to scroll through once, scanning at a high level to get an idea of the length, and then read it if you want to.


"Spamming" is a mental model? Mmmmmkay.


Besides its original meaning (repeated uninvited bombardment with information packages) I can think of only one alternative use of the pattern: games.

For example, rocket / grenade / arrow spam in TF2, or Lucio / Hanzo / Symmetra projectile spam in Overwatch. In this context, spamming is just firing in the general direction of the enemy, hoping that some of the rounds will hit.

Maybe this generalizes to repeated application of some cheap technique that has a low probability of success, where the low chances of success are compensated by the low amount of effort per 'shot' required -- but I can't think of any more examples.
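A back-of-the-envelope sketch of that generalisation, with made-up numbers:

    # The generalisation in one line: a cheap, low-probability "shot" is worth
    # repeating whenever p * payoff > cost per shot. Illustrative numbers only.
    p_hit, payoff, cost_per_shot = 0.02, 100.0, 1.0

    expected_value_per_shot = p_hit * payoff - cost_per_shot
    print(expected_value_per_shot)            # 1.0 > 0, so spamming "works"

    # Probability of at least one hit after n shots: 1 - (1 - p) ** n
    n = 200
    print(round(1 - (1 - p_hit) ** n, 2))     # ~0.98 -- volume substitutes for accuracy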


That's pretty good. Gotta applaud you.


TL;DR: What you learn in an Economics degree.


Such cynical words, besides depriving the world of a much-needed listicle, will also get us downvoted. Please don't offer such awkward comments, which might cause people to pause and think. Now back to my Facebook feed...


We detached this comment from https://news.ycombinator.com/item?id=12040892 and marked it off-topic.



