Skin in the Game as a Required Heuristic for Acting Under Uncertainty (ssrn.com)
112 points by breck on July 29, 2013 | 62 comments



I wrote a blog article about requiring skin-in-the-game the other day. My food for thought is:

Small transactional costs stop things from existing at the fetal/growth stage. Evolution destroys them long before their long-term benefits can demonstrate themselves, because it favors short-term adaptive traits over long-term ones. If you require skin-in-the-game transactional costs, then your transactions will happen far less often, and in the short term you will get out-competed for resources by those that protect themselves with top-down regulations.

It is often difficult for third parties to evaluate the authenticity of a cost or regulation, which is why institutional costs exist: they factor out hundreds of checks of hundreds of transactions into a single check of the institution that regulates those transactions (which is itself often proxied by the social proofing of that institution). This lowers the transaction cost, which enables high growth.

Fetal really is a good analogy here. The womb exists for good reason; scaffolds exist for good reason; decorum on first dates exists for good reason. Without these things, early transactional costs defeat growth.

In short, skin-in-the-game evolves out of artificial systems because it is bad for early growth. However, this does not mean that it should; in well-designed/natural systems, the entity first exists in a protective bubble during its gestation period and then slowly has its skin placed in the game.


I don't think your analogies apply. We do not demand that a fetus bear the cost of its bad decisions, because it is incapable of making decisions. (But we might start exposing a child to consequences of its bad actions, in proportion, quite early.)

Similarly, a building under construction is incapable of advising someone to buy stocks. And first dates do not usually result in land wars in Afghanistan.

Can you give an example of a process or profession which can be said to willfully expose others to risk, which does deserve this sort of protection when it's just starting out? It seems to me that such responsibilities are nearly the definition of maturity.


I think that's the wrong level of abstraction.

Everything has a cost of existence.

I do not think that it is useful to discuss the "cost of acting under uncertainty" when you can discuss the "cost of existence"; the former is just a special case of the latter, namely the existence of conscious actions. Understood this way, the fetus has a cost of existence and the analogy holds: as it grows older, it is protected against the outside world in more and more ways, through its location, tribe, mother, etc. (all later forms of gestation period), as we pass through the traditional coming-of-age story arc.

Skin-in-the-game hurts fragile entities but helps resilient ones (individual or systemic) become antifragile. At growth stages, resilience is the exception to fragility. At maturity, fragility is the exception to resilience.

The concrete cases that validate this level of abstraction are: (a) entrepreneurs building profitable companies, many of whom need angel investment in order to survive their operational costs [1]; and (b) early-stage deals, which are extremely fragile to transactional costs, so placing a skin-in-the-game cost on one deal but not on another is extremely likely to make the other deal preferable.

I agree with both you and Taleb that in cases where an actor has the potential to harm others, they should be responsible for their actions and have skin-in-the-game. However, my point is that no entity is just born like this; they have to place their skin in the game slowly. Lawmakers have a tendency to add regulatory costs to a system, costs that, through the process I've just described, stop new entrants from joining and create the perfect opportunity for monopolies to form (monopolies which eventually learn to manipulate the presentation of their upsides and downsides while avoiding the regulatory downsides anyway).

[1] Entrepreneurs are ultra-fragile, and therefore the whole startup system that emerges around them helps to offset the operational risks at early stages. Helping individuals bear risk is how systemic antifragility is grown.


>And first dates do not usually result in land wars in Afghanistan.

Keyword: usually.


Goddamn butterflies


>In short, skin-in-the-game evolves out of artificial systems

"Skin-in-the-____"


It doesn't mention any negatives of requiring skin in the game. The most obvious is adverse selection: risk-averse people won't enter a profession where mistakes are severely punished, so you get only risk-seekers.


Well, the degree of risk that someone should take on is debatable. The point of the article was - as I understand it - that we should be suspicious of people who just won't put their money where their mouth is:

"The skin in the game heuristic is best viewed as a rule of thumb that places a pragmatic constraint on normative theories. Whatever the best moral theory (consequentialism, deontology, contractualism, virtue ethics, particularism etc.) or political ideology (socialism, capitalism, libertarianism) might be, the 'rule' tells us that we should be suspicious of people who appeal to it to justify actions that pass the cost of any risk-taking to another party whilst keeping the benefits for themselves."

Designers who wouldn't drive a car they designed. Doctors who wouldn't get treated in their own hospitals. That sort of thing.

The problem, to my mind, is how you'd motivate that sort of risk taking for people who are more powerful than those they're advising, or have different goals, or are in positions where they can't prove that they've skin in the game. I think most of us would, as a matter of course, get people to put their money where their mouth is if we could practically pin people down to it on a day to day basis. But, in reality, I might not choose to be treated in the hospital I work at because I can afford better care, or because I think some of the people there have it in for me, even if I'm an excellent doctor. I might not choose to drive a car I designed because I can afford a better one, or have different tastes.

etc etc.

And that's even in the case that I can show that I've skin in the game. I've worked jobs before where having any interests in the problem that might tempt you to fudge the results was grounds for not being employed.

It's very difficult to apply such a heuristic when I'm not necessarily playing the same game as you for the same prizes.


It's kinda interesting that the modern corporation was explicitly designed to shield the decision-makers from risk. Without it, many of the large industrial & trade breakthroughs of the Enlightenment wouldn't have been possible, as nobody would've been willing to take the personal risk of venturing to the Indies or setting up a cotton mill when the downside of failure was perpetual indentured servitude.

I don't think it's bad to require some skin in the game, but require too much and many risks that are quite beneficial to society as a whole just won't happen.


Partially disagree. The original necessity of corporations was to distribute risk, not to remove it. There was the same total risk, but borne across a larger number of people. The financial risk was unchanged, but the consequences were made less catastrophic. Equivalently, a corporation aggregates enough resources for the risk to become rational.

By contrast, one of the functions of modern corporations is that culpability does not pass through to the decision-makers. This is an actual removal of risk, not a redistribution, as there are consequences that a corporation is not capable of facing.


I was thinking about this the other day. Just as with joint-stock ownership, legally limiting financial liability redistributes risk (by transferring it to creditors) but doesn't reduce it on net.

Creditors tend to be established firms and bankrupt companies tend to be new, small enterprises, so it seems that limited liability promotes socially beneficial risk-taking by offloading bad outcomes on those who are best able to absorb them. ... "From each according to his ability, ..."


> "nobody would've been willing to take the personal risk of venturing to the Indies"

Made me think of this: http://www.youtube.com/watch?v=pM-igYjn6E4

The corporation shields against financial risk, thus enabling actual risk-taking. When the latter is missing, the problem of a lack of skin in the game may appear.


Well, it caps risk, but doesn't remove it. Whoever owns the corporation presumably put some equity into it, right? That's the amount they have at risk.


The cap is about as low as it could be. The investors put up capital which is committed to the ordinary operation of the business. Buy some materials, hire some workmen, make some stuff. Perhaps none of it sells even at 90% off. The capital is gone. BUT the investors were never on the hook for any extra.

One could imagine company law creating a "same-again" limitation on liability. If you invest a million dollars, you have two reasons to watch the management you appoint: first, they could lose your investment; second, if they screw up and do a lot of damage, the investors are on the hook for the same again, potentially another million dollars.

It is quite a tricky proposal to analyze. Consider the common practice of supplying goods on 30 days' credit. It is slightly risky: the purchaser might go bust without paying. Under the current once-only limitation of liability it is common for suppliers to lose out, which can cause a ripple effect: one company goes bust, causing its suppliers to fail, causing their suppliers to fail...

Under same-again limited liability the ripples go in a different direction as investors liquidate assets to meet "second" liabilities.
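To make the two regimes concrete, here's a toy sketch in Python (my own, purely illustrative; the function name and numbers are made up, not from the comment above):

    def investor_loss(equity, shortfall, regime="once_only"):
        # Loss borne by an investor when the firm's liabilities exceed
        # its assets by `shortfall`. "once_only" is the status quo;
        # "same_again" is the proposal above.
        loss = equity                        # committed equity is gone either way
        if regime == "same_again":
            loss += min(shortfall, equity)   # on the hook for "the same again"
        return loss

    # A $1m investor facing a $1.5m shortfall beyond the firm's assets:
    print(investor_loss(1_000_000, 1_500_000, "once_only"))   # 1000000
    print(investor_loss(1_000_000, 1_500_000, "same_again"))  # 2000000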


I'd be interested to hear examples of professions where you think we want risk-averse people involved in inherently risky activities without skin in the game. Are these activities beneficial to society?


Medicine.

Startup founders.

Car designers.

Airline pilots.

Of course, all these guys have some skin in the game now. I'd argue that in all these professions, artificially increasing the penalties for failure wouldn't make the world better.


No one is advocating increasing those penalties. Taleb is just saying that there should be some penalties, which kinda makes sense:

You feel safer if the pilot is in the airplane with you, with his life at risk as much as yours, than if, e.g., he were just remote-controlling the plane from his office.


OTOH, you feel safer if the 911 dispatcher is sitting in a far-away call center and not freaking out because she's at the scene with you, and you feel safer if the surgeon is a dispassionate expert in their field and not the patient's mother. You'd feel better being represented by a lawyer who is not on trial with you than by one who is.

I think a lot of this depends upon whether the task calls for a rational or emotional response. For tasks that require a rational response, having "skin in the game" can cloud your judgment and make you perform worse than if you were a dispassionate observer. For tasks that require an emotional investment, you want the person's risks and incentives to be aligned with yours. (For example, you want a product designer to actually use the products they design, and you want a teacher to care about your kids as much as you care about them.)


You didn't really respond to OP's point. They said some skin in the game is sufficient.

A surgeon faces large legal risks for a botched surgery. A lawyer is likewise legally responsible for his actions. Doctors and lawyers are two of the professions with the most skin in the game!

I don't know enough about 911 dispatchers to say if they face liability.


Are you willing to make the inverse (and I think more interesting) argument, that reducing their respective skin in the game would indeed make the world better?


Not generally. US society has the penalties for failure tuned about right. Each profession's rules are the result of plenty of public policy debate that's much more nuanced than Taleb's.


How exactly has US society tuned the penalty for pilots that crash (fatally) with their airplanes?


I think you are confused about what skin in the game means. Pilots and startup founders are among the professions with the most skin in the game. If they crash their plane/company, they die/go bankrupt (or at least lose time and money) with it. That pilots die in a plane crash is probably the main reason why air travel is so safe.

On the other hand, doctors have, compared to their patients, no skin in the game. If the patient's health gets worse or the patient dies, the doctor stays unharmed. This is partially the reason for overtreatment in medicine (the other reason being the asymmetry between the rewards for positive/negative effects of treatment vs. no treatment).
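That reward asymmetry is easy to make concrete. A toy expected-value sketch (my own invented numbers, not from any study): even when treatment barely helps the patient, "doing something" can still dominate from the doctor's point of view.

    # The doctor's payoff, not the patient's; all numbers made up.
    # Treating earns credit when things go well and mild blame when
    # they don't; inaction earns little credit and heavy blame.
    p_improve_treat, p_improve_wait = 0.60, 0.55   # treatment barely helps
    credit_treat, blame_treat = 1.0, -0.2
    credit_wait,  blame_wait  = 0.2, -1.0

    ev_treat = p_improve_treat * credit_treat + (1 - p_improve_treat) * blame_treat
    ev_wait  = p_improve_wait  * credit_wait  + (1 - p_improve_wait)  * blame_wait
    print(ev_treat, ev_wait)   # 0.52 vs. -0.34: treating "pays" for the doctor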


> That pilots die in a plane crash is probably the main reason why air travel is so safe.

Then what about driving? By the same logic, car accidents should be much rarer. Flights are much safer due to the amount of research done in air-travel safety, in general, and also after every accident. And also because on roads you have too many cars interacting together at the same time, where any one person's small mistake can cause an accident, which is not the case with aeroplanes.


"Flights are much safer due to the amount of research done in air-travel safety, in general, and also after every accident."

Yes, of course. But why? Why is the amount of safety research so much higher after a plane crashes compared to when a patient dies? Why are simple checklists common practice for pilots but not for doctors, even though they could save many lives [1]?

As for car travel: it seems to be inherently more dangerous for the reasons you mention. But let's introduce a principal-agent problem [2] into car driving. Assume, for example, that taxi drivers steered the vehicle from a safe place, as with a drone. Would you want to ride in such a taxi?

[1] Gigerenzer, Risk Savvy: How to Make Good Decisions.
[2] https://en.wikipedia.org/wiki/Principal–agent_problem


If you're wondering what medicine would become without skin in the game, look no further than chiropractors and the "medical arts clinics" popping up on every corner.


Medicine :-)


Doctors do have "skin in the game".

They face rigorous scrutiny for their medical actions.


Well, sorta, and mostly in the US, where malpractice lawsuits are an issue.


But don't doctors insure themselves against such suits, so the end result is just higher prices?


Excessive punishment can be a consequence of artificially limiting entry. So if the limits are removed or relaxed, we might hope that the risk/benefit converges to an ideal.


It's not immediately obvious there's anything new or interesting in there: how is this heuristic different to a classic Hansonian bet [1]?

One potential advantage is that having "skin in the game" has a more positive connotation than betting on outcomes, to the general public at least. Regardless, Hanson at least deserves a mention.

From a stylistic point of view, I'm not a big fan of the appeals to authority (e.g., "the ancients were fully aware") either.

From a startup perspective it's worth mentioning that mentorship or advice is also generally more confusing and less useful when the mentor lacks "skin in the game". Hence mentor "whiplash".

[1] http://www.overcomingbias.com/2013/07/bets-argue.html


Good points. However, I don't think that "the ancients were fully aware" is an appeal to authority. I think the idea is that ancient people tended to use things that were successful; therefore, if the ancients used something, we should boost our confidence in it (relative to the baseline).


You're partially correct. I'd say that we tend to adopt the successful cultural adaptations of our ancestors and discard the others.

In other words, when comparing an ancient society to one descended from it, I'd expect the "successful" adaptations of the ancestor culture to be disproportionately present in the descendant. The converse need not be true.

In Taleb's example of the builder, those ancient heuristics became our tort law (and will be part of our reputation networks in the future). He doesn't mention all the cultural adaptations we've since dropped...

So it's obvious at best, and an appeal to authority at worst.


The comments here seem to miss Taleb's central idea that harm is non-binary. You need to consider not just whether something occurs, but how much impact it has.

I don't care whether a bank earns a financial gain or a loss. I care how big the gain or loss is. One hundred years of small profits can be wiped out in one bad quarter.
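A quick toy calculation (mine, not the commenter's) of how a single bad quarter can erase a century of small profits:

    # 400 quarters (100 years) of steady +1% profit, then one -99% quarter.
    wealth = 1.0
    for _ in range(400):
        wealth *= 1.01     # compounds to roughly 53x the starting capital
    wealth *= 0.01         # a single catastrophic quarter
    print(wealth)          # ~0.53: less than the bank started with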

I do not believe Taleb is proposing punishing all risk taking. He's advocated aggressively taking risks where the downside is known.

The problem we face is that many are now in a position to get a limited, positive upside if things go well, but they face no downside. And the potential harm in such situations has no upper bound.

Skin in the game ought to be proportional to harm. Many comments mention medicine as a profession that should not have skin in the game.

Rubbish. Doctors are very liable if things go wrong. But it depends how wrong. We don't hold doctors accountable for small errors, or random errors.

We hold them accountable for big errors. The bigger the error, the worse the punishment.


There's an issue in convincing people that harm is completely continuous. If there is a perception of some culpability in performing an action, no matter how small, it is easy for that culpability to be exaggerated into a significant quantity in people's minds. This would make society increasingly hostile and litigious over even small gestures, like offering to watch someone's stuff while they visit the restroom.


Such a simple heuristic, but I think with far-reaching consequences. Last week, NNT posted a link to a news article that talked about a new law in an Indian state requiring headmasters to taste the food served in the school cafeteria. This was initiated after a bad case of contaminated food and the deaths resulting from it.


I gave the paper a quick skim. My thoughts:

As they point out in the paper, there are some cases where risk is intentionally removed, such as through bankruptcy for businesses to protect entrepreneurs. I think that can have positive effects.

I'm not fully convinced that "academic economists, quantitative modellers, and policy wonks"..."have no disincentive and are never penalized by their errors." I believe their reputation is at stake. The authors may mean they have no financial risk, but their reputation is tied to their future earnings. I'm not sure that's a bad thing as long as it's considered.

Overall I'd say there's nothing surprising in this paper, nor any specific proposals (as I hoped) but mostly a philosophical argument made in reaction to the current financial and political environment. I do agree with their premise. As a political view, I think "Skin in the Game" would make a pretty good slogan.

It would be interesting to implement distributed trust networks around this concept. If there are clear failure/success signals, you could have nodes post a bond that is released on success and paid out on failure. Automated damages collection, I suppose.
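A minimal sketch of such a bond, assuming a trusted settlement layer and an unambiguous success signal (my own toy code; all names and amounts are hypothetical):

    class Account:
        def __init__(self, balance):
            self.balance = balance

    class Bond:
        # Skin-in-the-game stake: escrowed up front, released back to the
        # poster on a success signal, paid to the harmed party on failure
        # ("automated damages collection").
        def __init__(self, poster, amount):
            assert poster.balance >= amount
            poster.balance -= amount             # escrow the stake
            self.poster, self.amount = poster, amount
            self.settled = False

        def settle(self, succeeded, claimant):
            assert not self.settled
            self.settled = True
            target = self.poster if succeeded else claimant
            target.balance += self.amount        # release, or pay damages

    advisor, client = Account(100), Account(0)
    bond = Bond(advisor, 50)
    bond.settle(succeeded=False, claimant=client)
    print(advisor.balance, client.balance)       # 50 50

The escrow mechanics are trivial; the hard part, of course, is producing the success/failure signal without a trusted judge.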


His point is that there's rarely a reputational hit for being wrong in those professions. There should be, but there isn't.

Examples:

Thomas Friedman supported the Iraq war. http://www.youtube.com/watch?v=ZwFaSpca_3Q

Joseph Stiglitz and Peter Orszag predicted that Fannie Mae and Freddie Mac faced near-zero risk.

Stiglitz later claimed credit for predicting the financial crisis(!), and Orszag had a prominent position in Obama's administration.

http://www.pierrelemieux.org/stiglitzrisk.pdf

What commentator can you think of that's lost his job for a bad prediction?

Pundits have perverse incentives. They can point to correct predictions to boost their careers, and they are penalized very little for wrong ones.

They have an incentive to make many predictions, and retrospectively choose to highlight only those that panned out.


To be fair, Fannie Mae changed its practices, engaging in the risky "financial innovations" that caused problems for a lot of financial institutions, after Stiglitz's original study of their exposure to risk.

It is not as if Stiglitz was in control of the institution or encouraged that they engage in these more risky behaviors.

It is more like a doctor performing a check-up on a patient, which represents a mere snapshot in time, and saying they're in good health. And then the patient decides it's okay to start smoking, eating fast food for every meal, and giving up exercise. Are you going to blame the doctor for not warning that this could happen?

I would say the stronger lesson is that continued checks and oversight are critical to the health of any institution or business. Though I also agree that people who make sloppy predictions should be held accountable for their behavior.


> As they point out in the paper, there are some cases where risk is intentionally removed, such as through bankruptcy for businesses to protect entrepreneurs. I think that can have positive effects.

Perhaps the point is that the risk is not removed in those cases, but reduced. Entrepreneurs clearly do have skin in the game, and bad decisions will affect them. At the same time, having a "safety net" that ensures bad decisions will not completely destroy an entrepreneur's life is clearly beneficial because it allows more people to try their hand at it.


Good point.


Interesting read. I liked the reference to Ralph Nader... Those who vote in favor of war should be required to enlist in a draft.

Though I don't know how seriously I can take this when the guy uses the word 'wonks' in reference to those he disagrees with.

Also, I think that the lack of 'skin in the game' encourages risk, which counteracts a natural tendency for large groups to be more conservative (look at big business vs. small). If you made decision-makers have skin in the game, you'd have even less innovation and fewer new ideas in large institutions. This takes into account factors beyond his thesis, but it leads to interesting thoughts when it comes to the practical application of his ideas.


I could be wrong, but I believe the term "policy wonk" is not necessarily negative... I assumed it simply referred to one who is well-versed in policy debates and data.

For example: http://www.washingtonpost.com/blogs/wonkblog/


This is correct.

source: extensive reading of political blogs + newspapers circa 2006-2008.


...the guy uses the word 'wonks' in reference to those he disagrees with.

This is a pretty mild epithet coming from NNT. He has described the same group in much less complimentary terms.


> Those who vote in favor of war should be required to enlist in a draft.

In the same way, the people who were counting bananas or suchlike after the Fukushima incident should have been given incentives (including financial ones) to move close to the affected area with their families and kids. You either trust your words and computations or you don't.


the skin in the game heuristic relates directly to the virtue of being such that the system will not only survive uncertainty, randomness, and volatility but will actually benefit from it.

Doesn't Taleb mean that the 'skin in the game' heuristic will prevent uncertainty, randomness and volatility in the system rather than bring benefit? In examples such as the 07-08 financial crisis, which he must be referring to indirectly, 'skin in the game' would have meant less risk-taking, and therefore reduced volatility. But I can't see how the system would have actually benefited from volatility.


Taleb defines a system as "anti-fragile" if it benefits from volatility. His previous works build up a mathematical framework for distinguishing between things which are harmed by volatility, things which are neutral to it, and things which benefit from it: fragile, robust, and anti-fragile, respectively.
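A rough numerical sketch of that framework (my own illustration, not code from the book): by Jensen's inequality, a payoff that is convex in some underlying variable gains from volatility, a concave one is harmed, and a linear one is indifferent.

    import random
    random.seed(0)

    def mean_payoff(payoff, noise, trials=100_000):
        # Average payoff when the input x = 1 is randomly perturbed.
        return sum(payoff(1 + random.uniform(-noise, noise))
                   for _ in range(trials)) / trials

    shapes = {"convex (anti-fragile)": lambda x: x ** 2,
              "linear (robust)":       lambda x: x,
              "concave (fragile)":     lambda x: -(x ** 2)}

    for name, f in shapes.items():
        # Positive: gains from volatility; negative: harmed by it.
        print(name, round(mean_payoff(f, 0.5) - f(1), 3))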


What's so mathematical about it? It has lots of pretty graphs and charts, but as far as I could tell, it's just philosophy masquerading as mathematics.


I was a bit confused by this too, but perhaps he means that in an ideal system of universal skin-in-the-game, volatility and chaos will serve to optimize the system by triggering failure for those who took risks unwisely and, therefore, providing negative/corrective feedback to the system overall.

In other words, if you want a system to evolve robustness, you probably want to expose it to a diverse array of inputs.


I haven't read the paper yet, but I remember reading someplace that a fundamental principle of economics is that if you introduce costs into a system, you should bear them.


So what happens to the authors of this paper if we take their advice and it goes badly?


Nothing of course! It's one of the wonders of being an economist ;)

(sarcasm aside, I do think there should be stronger professional repercussions in economics for advocating idiotic policies. It's one thing to found a startup and fail, and then start again; it's another to cause a deep recession in a country and then say "oh well, let's try again")


The author follows this rule: only recommend a strategy that he himself currently uses.

If the advice goes badly, the author also suffers. Such recommendations are more reliable than those from authors without 'skin in the game'.


Nothing educates you like an actual personal loss. If it's always other people's money, people will figure out how to game it. And those okay with gaming it will soon crowd out the rest, as they can work for less because they get paid extra for gaming it.

If the banks writing bum mortgages had to take the loss instead of selling all the losers to Fannie and Freddie, many bum loans would never have been made.


Does this really solve the problem?

The execs at Lehman and Bear had skin in the game. Sure, some of the senior-most folks got away with small fortunes, but everyone lost money. Some people lost their life savings.

Conceptually this makes sense (similar to requiring all derivatives trades to have margin requirements to limit leverage) but does it really fix what went wrong?


My favorite example of skin in the game is riding a bicycle.

Compared to driving a car with a ton of metal and five airbags around you, riding a bike means that you literally have skin in the game.

On the other hand, driving a car, you basically have no skin in the game. You jump a red light or drive too fast, and as long as the cops don't catch you, it's alright.


In some sense, the same ideas he has always written about, but again expressed in a new way. I like it.


Investors need to act as if they could stomach a total loss on the investment; founders need to act as if they can't survive failure of the venture.

Now imagine that these two roles usually exist in the same person, to varying degree.


This is why I believe in the devops approach. When ops and dev are separate, you get both developers acting in an environment with little useful feedback and ops always able to blame the developers.


I'm re-watching The Wire at the moment, which I'm enjoying greatly, and one of the best things about it is watching the lengths that the characters go to in order to avoid being personally linked with criminal operations under their control. They benefit when the organization benefits, but aim to avoid the downsides when their staff's activities are exposed.

As long as there are areas with very high reward (investment banking, drug dealing, human trafficking, startups), and enterprising individuals with the connections to organize them (either through experience and leadership progression, or by using opportunities to undermine former leaders / acquire resources), people will rise to these opportunities.

Once an operation reaches a certain size however, many leaders become disconnected - intentionally or otherwise - from the day-to-day business of their organization.

Given the typical attitudes on HN, I'd expect that most founders here would be genuinely mortified to find out that their software had caused real-life problems - most would take a lot of care to ensure good user experience and correct results; these make good business sense too. However, I think many would eventually aim to become distanced from the business too - the dream of reliable passive income.

Meanwhile, we imagine - and it's not hard to imagine - that many bank executives, criminal leaders, and others actively enjoy their life of distance and immunity while feeling little remorse for the damage they do, and certainly without taking any intrinsic risk for it.

We expect that the rule of law will deal with these problems when they occur - that's what optimistic films and positive fiction tell us - but the reality is that a good-enough combination of wealth, influence, leverage over others, and wits can let people stay at arm's length from (but in control of) nefarious affairs, even if they are aware that they are causing harm.

In some ways I think this expresses itself even in the trend for honest businesses to avoid liability where possible -- if we take liability for what we do, we also have to do the best we can and take genuine care of our customers.

Somehow, fear of litigation, genuine exploitation of litigation (cf. ambulance-chasers), and fear of decreased business efficiency have reached the level where companies do prefer to distance themselves. Often it is via lacklustre or even laughable attempts, such as oft-ignored in-store signage, disclaimers, or automated customer interactions.

Retreating from each other for fear of risk isn't healthy as a general trend, and neither is our inability to reach and re-integrate those who expressly intend to maintain their distance to avoid risk to themselves.



