Ways people trying to do good accidentally do harm instead and how to avoid them (80000hours.org)
322 points by robertwiblin on Oct 18, 2018 | 144 comments



This is very interesting!

But I've seen a lot of pushback against this kind of thinking in some political groups. For example, saying that you should follow norms of niceness when promoting a cause - people call that tone policing.

https://en.wikipedia.org/wiki/Tone_policing

You could say you should criticise the people who are tone policing rather than the people who are worked up about a cause that personally affects them and so aren't managing to be very polite, but the problem is not the people who try to tone police you, it's the people who don't like your tone and don't criticise you for it - they silently just ignore your cause or vote against it.


People who criticize tone policing are technically right that the content of an argument is what should matter, but that smacks of ignorance of how humans engage in conversation.

If someone is screaming at me and ending every sentence with, "you fucking moron!", there is approximately a 0% chance I'm going to take them seriously and I'm going to assume emotional outbursts will be sent in return to any responses I care to make.

How many pro-choice people do you think are persuaded by the people screaming "murderer" outside of Planned Parenthood?

People have a limited amount of time to deal with all of the issues in the world. If you can't explain your argument in a calm, rational manner, people who are on the fence will shut you out.

See what Howard Dean's screaming did to his campaign.


   How many pro-choice people do 
   you think are persuaded by
Symmetrically, how many pro-life people do you think are persuaded by the people screaming "woman-hater/sexist/misogynist" outside of Planned Parenthood?

   If you can't explain your 
   argument in a calm, rational 
   manner,
You overestimate the efficacy of explaining in a calm, rational manner. As a university teacher with > 10 years' experience, I can tell you, from extensive and reliably reproducible experience, that explaining in a calm, rational manner is also typically overrated. Year in, year out, I tell my students to test their coursework before submission to ensure it compiles; year in, year out, I tell my students not to cheat, etc.; and year in, year out, a large number of students ignore my calm, well-argued and perfectly rational advice.

What screaming (or the university equivalent: bad grades) achieves is not so much conviction as communicating urgency. How to react to urgency is a different matter.

A second social function of screaming is (re-)producing simple us/them group identities.


100% agree, and I'd like to draw out a thread that is highly relevant to the original title. Most damage is done by large groups of people who want simple solutions to complex situations (like, eg, characterising a debate as having precisely two well defined sides).

There are some situations in life that are genuinely complicated (eg, running the logistic system that gets food from farms to houses). I think it might be flat-out impossible to communicate a complicated solution to a large crowd. The best I've ever seen a large crowd do is pick someone who looks like they might be able to tackle the problem and then the collective accepts whatever they get - good or bad.

Realistically, people piping up with some variant of "this simple solution will fix the problem", on topics where they have no actual skin in the game, are the problem. They undermine complicated efforts to resolve situations. Moderation, balance, compromise and attempting to deal with the complexity forthrightly come together to give much better results than using silver bullets.


   characterising a debate as 
   having precisely two well 
   defined sides
I see this as a reaction to complexity: if a subject is too complicated to grasp, comprehend and communicate all relevant subtleties, a common reaction is radical simplification down to two options, with a strong preference for one side.


Screaming is useful for warning someone about something that's obviously going to negatively affect them. Like walking in front of a train, or how cheating will make them fail a course because the screaming person will be the one failing them.

When it comes to more abstract concepts, the average person will ignore the screaming lunatics and side with the person that appears sane at first glance, even if they're really the crazy ones.


>Like walking in front of a train

Agreed.

>or how cheating will make them fail a course because the screaming person will be the one failing them.

Completely disagree. In my life, there have been plenty of people for whom this is not effective. Screaming is good for urgent events that will have immediate and serious consequences. Failing a course is a long term consequence.

Also, beyond a certain frequency, screaming loses effect. I once transferred to a school that had lots of screaming and heavy corporal punishment. It was very clear: The screaming and punishment had virtually no effect on the students. It would reduce them to tears, and once the tears were gone, the behavior continued. In my prior school the frequency of such disciplinary measures was much lower, and almost always was more effective because it was rare.

So activists who are always screaming are destined to be ignored (this being only one of the reasons).


> and year in, year out, a large number of students ignore my calm, well-argued and perfectly rational advice

I would point out that they probably remember your advice, it's just not "connected" to the part of their brains that does the driving under stress of deadline.

I find you can make a lot of true predictions by extending the "system 1/system 2 thinking" model into a complete disjunction: that everyone is, internally, one person who listens and talks and learns social mores; and then another person who acts and reacts and learns by doing; and that—other than sharing a body—these two internal people have nothing in common and you should assume that any lesson that's been imparted "through" one of them has absolutely not been imparted to the other.

I.e., if you tell someone something, they'll be able to tell you what you said, and it will affect what side of a debate they engage in in the future, but it won't affect their behavior in the slightest, except insofar as they verbally precommit to doing something in a way that then forces doing-them to do it.

And, if you get someone to, say, play a video-game simulation of some complex system that imparts a particular lesson about that system into them, then they'll still verbally argue on the "wrong" side regarding how that system works, until someone essentially forces them to sit down and have their "social mind" go over the experience their "doing mind" just had, explaining it to themselves to convince themselves. (Some people probably do this "narrating themselves observing their doing mind" by default to some degree; these people are probably measurably better at some meta-skill like learning or teaching.)


> What screaming (or the university equivalent: bad grades)

Grades are both an expensive signal, and an adjustment to an incentive gradient: Opportunities to send the signal are limited, and their use is constrained by both written and unwritten rules.

Giving someone a bad grade and a way to do better redirects whatever value they place on grades into doing what you asked. It's like a performance bonus, or cheese at the end of a maze.

Screaming, however, is nearly pure signal. A relatively cheap signal, if overused. The only expense is trading off your reputation, in exchange for immediate attention. If you're screaming at The Outgroup, you have no reputation to lose, so there's no value to the signal.

To get the people outside Planned Parenthood to listen, you'd have to incentivize them the way grades incentivize a student--or at least give an expensive signal that you're worth listening to.


People often don't do what's best for them. But that's a separate issue from trying to convince someone to join your cause. If you're representing advocacy group X and you're being melodramatic and throwing a tantrum, rather than pinpointing the issues you want to solve and proposing solutions, I'm much more likely to think that said group is composed of adult babies making childish complaints and unworthy of respect or attention. You'd be essentially leaving it to me to independently discover if there is any value in the group, rather than taking the opportunity to inform me yourself.


They are not trying to convince you to join their cause. Neither group is. You are unlikely to be in a position to do anything about their cause, and even if they convinced you, you would likely remain passive.

Both are attempting to force change or prevent change. For all the value in calm, rational, pleasant discussion, it does not bring change. It makes you pleasant, but it does not really bring results where real stakes are in question.


Are bad grades actually the equivalent of screaming? Or are they a simple consequence of students' actions?

That screaming produces group identities I don't doubt, but it seems unlikely that, once you've converted people into "them", you will be able to convince them of much.


> ignore my calm, well-argued and perfectly rational advice

That's quite one-sided...


"An interesting article in The Atlantic talks about studies showing that liberals think in terms of fairness while conservatives think in terms of morality. So if you want to persuade someone on the other team, you need to speak in their language. We almost never do that. That’s why you rarely see people change their opinions...

...If your aim is to persuade, you have to speak the language of the other. Talking about fairness to a conservative, or morality to a liberal, fails at the starting gate. The other side just can’t hear what you are saying."

http://blog.dilbert.com/2017/02/15/how-to-persuade-the-other...


I'm not sure Scott Adams has really grasped moral foundations theory. It explains the differences in the moral systems used by American liberals and conservatives - it does not claim that liberals lack a sense of morality.

His abortion example strikes me as particularly off-base. Again, it's not that US liberals don't care about morality. Of course liberals think murder is wrong! They just don't consider a fetus to have the moral standing of a human.

Anyway, some direct links regarding moral foundations theory (though I highly recommend Haidt's book, The Righteous Mind):

https://www.moralfoundations.org/

https://en.wikipedia.org/wiki/Moral_foundations_theory


Excellent article, thank you. Thinking back to a few discussions I've had regarding the current political landscape with 'the opposition', this (albeit oversimplified) method checks out.

Thanks again. Of course Adams has his own political bias but at least he appears to be trying to bridge the gap here.


> liberals think in terms of fairness while conservatives think in terms of morality

i don't think this is a very accurate characterization. for one thing, fairness itself is a moral value. however, i do think liberals and conservatives tend to have different moral systems. in broad strokes, liberals tend to have more of a utilitarian perspective, while conservatives tend to be more deontological.


Jonathan Haidt has actually done some great scientific work on the real differences between liberals and conservatives. This is a shortish version: https://www.youtube.com/watch?v=vs41JrnGaxc There's also some more detailed and longer versions if you poke around on YouTube a bit, plus a book you can buy if you're really interested.

I don't think your characterization of either side is particularly accurate. Neither side is utilitarian, and both sides are plenty deontological, just with different rule sets. Both sides are rationalizing deeply-held instinctual beliefs, or the lack thereof.


Everyone will give you lots of advice and requirements etc, and it's completely impossible to follow even a tiny fraction of it.


> People who criticize tone policing are technically right that the content of an argument is what should matter, but that smacks of ignorance of how humans engage in conversation.

Absolutely nailed it. If you're unpleasant/uncivilised to deal with then the content of your argument becomes irrelevant because a decent portion - and perhaps even the majority - of people won't want to have anything to do with you.

Being obnoxious is another class of communication where good content is masked by bad delivery, the same as if you fail to communicate and explain your point clearly: again, the message gets lost. People switch off and disengage.

Good manners really do oil the wheels of communication on difficult and contentious topics even if it means the process takes longer than you'd like.


It also depends hugely on context. A well placed "you might have a point but you're acting like a dick" might bring a discussion back into civility, but then you also get people trying to shut down a discussion that they find uncomfortable by focusing exclusively on criticizing incidental word or phrase choice.


And then there's King's Letter from a Birmingham Jail: https://www.africa.upenn.edu/Articles_Gen/Letter_Birmingham....

Sometimes you need to make waves in order to advance a cause.

Now that doesn't mean you should be uncivil (like screaming "you fucking moron!"), but it can be a fine line to walk. So it's understandable that tone policing itself can become a controversial topic.


It's proportionate to the cause.

If we're talking about the Jim Crow south, that's one thing. King even wrote a whole essay about how white moderates 'tone policing' the civil rights movement needed to get on board or get out of the way. And he was right. Emmett Till's lynching had just happened, they were being categorically denied rights, and then getting fucking attack dogs unleashed on them, churches firebombed, etc, when they spoke up.

Some of the noisier people on twitter and in our workspaces in 2018, however... their net effect is giving talking points to Sean Hannity. At least 80% of America is turned off by their rhetorical excess.

King's body of work was productive. He was after results, not after feeding his own ego or outrage.


> King even wrote a whole essay about how white moderates 'tone policing' the civil rights movement needed to get on board or get out of the way.

It's that very letter, isn't it?

"I have been gravely disappointed with the white moderate ... the Negro's great stumbling block in his stride toward freedom is ... the white moderate, who is more devoted to "order" than to justice"


The problem isn't any different in 2018 than it was in 1955. The MAGNITUDE is different, but the magnitude is always different.

In 1955 there was a higher murder rate and a higher rate of disease. Just because both are lower in 2018 doesn't mean we don't continue to invest significant effort into working on both murder and disease.

In 1955, Jim Crow laws were appalling and inhuman. Lynchings were accepted by large segments of society. In 2018, we still have strong systemic bias in police forces that handle black criminals with a significantly higher rate of violence than white criminals leading to many preventable deaths. And we still have a justice system that punishes black criminals more severely than white criminals, affecting community cohesion and health.

These are still serious problems that need significant effort to solve. That is 99% of what the noisy people on twitter and in your workspaces are talking about. But Sean Hannity and Co pick up on the 1%, or misinterpret the 99%, and present it as some rhetorical excess about feelings and safe spaces and outrage.

No. It's still about human lives.


1) You're trivializing just how bad things were, a little bit.

2) I'm not arguing that racism doesn't exist. I'm arguing that the outrage-addicts are ineffective, and that they suffer from a bias towards slacktivism and purity-gating.

Contrast with King and especially LBJ. Making change requires building bridges.


The Dean Scream stuff was a dirty-tricks attack by his opponents. If he hadn't screamed they'd have attacked him for something else, like being too calm (see Jeb Bush's 2016 campaign). If you live your life trying to avoid giving your unethical enemies anything they can twist into an attack, you'll paralyze yourself.


> See what Howard Dean’s screaming did to his campaign.

I totally agree with everything you said, except this isn’t a fair or good example of what you’re talking about at all. That meme was a manufactured attack. Dean was rallying, and if you watch it in context, there’s nothing there. The clip was taken out of context to make him look irrational. You can do that to pretty much any politician with videos from rallies.


> If someone is screaming at me and ending every sentence with, "you fucking moron!", ... I'm going to assume emotional outbursts will be sent in return to any responses I care to make.

This is a good reason not just to avoid debates with screaming people, but also to ignore them.

People are sometimes wrong. Happens to the best of us. Other people give us feedback. The screaming person who responds to feedback by more screaming... is more likely to be wrong.

In theory, one could first spend a lot of time debating patiently and politely with as many people as possible, weigh the arguments, and determine what is most likely to be true. And then, start screaming at people, to spread the message quickly to many people. But in practice, people who are screaming now were most likely also screaming yesterday.


People outside of Planned Parenthood screaming "murderer" aren't trying to convince, they're trying to intimidate.

Howard Dean's scream wasn't. The scream was no louder than the audience: it was merely isolated by a directional mike.


Tone policing is about objecting not to what people say, but to how they say it, so I think what you give aren't examples of it at all. What you say about behaviour makes sense, but I don't think you read the wikipedia link to learn what tone policing actually is, as you're complaining about something else entirely.


Or more succinctly put - context matters.


Here we go: "Why Diversity Programs Fail" https://hbr.org/2016/07/why-diversity-programs-fail

There's a ton of data. Compulsory implicit bias training, for example, will most likely be bad for diversity (and whether implicit bias exists itself is another whole kettle of fish).


Note: that article also gives advice on how to succeed.


Interesting to compare that article to the Damore memo.


It's got to be a balance between niceness and driving a point home.

If you sound like a shouty crank with a weird idea then good luck trying to get public + politicians to make said idea happen.

It seems the internal dynamics of political groups can reward being shouty and extreme. So yeah I can imagine the message doesn't go down well there and is met with: 'yeah but X is SO important'.


There is a difference between the fallacy of “tone policing” (your argument must be invalid because you are yelling or rude) and this article’s point that your argument may be ineffective or counterproductive if you are yelling or rude.

In practice, it’s pretty easy to tell the difference.


The other aspects of the situation are: a) people who are in fact in control of their emotions, but any way for them to express the issues they have is seen as impolite, or they are expected to use euphemisms instead of factually correct labels; b) unequal expectations placed on politeness.

Both are big issues, and their combination especially can give a huge advantage to one side. It can effectively amount to an expectation that one side be submissive to the other.


>You could say you should criticise

I think the problem begins with the notion of should, and more broadly the notion that people have obligations. On top of that, people use the word should in a lazy manner. They don't want (or are incapable of) having a (significantly) deeper discussion on the issue - it takes a lot of work. To avoid that mental burden, shoulds come out.

Note, I'm not merely referring to your use of the word, but the larger idea that both sides have. Take this:

>but the problem is not the people who try to tone police you, it's the people who don't like your tone and don't criticise you for it - they silently just ignore your cause or vote against it.

As a general rule, when someone who thinks differently from you starts using the language of obligation, it will raise the defenses on the other side. The expected outcome is one of fight (counterargument, or merely raised voices), or flight (the behavior you describe).

There's likely a hidden obligation implicit in your comment. That I am somehow obligated to learn about wrongs being done to others, and obligated to help them (via voting, activism, whatever). And so the tower of obligations gets higher and higher.

In the last two years, I've read 3 books on effective communication, and they all describe this - one goes all the way and suggests you stop using words like should. Others are more nuanced but are saying the same thing: As long as your posture is that of "I know what's right" (which "should" automatically confers) vs "Let's explore and see how I might be wrong", you'll have this problem. (Internally you may be open, but what matters is your showing it - often the outward posture is different from your inner state).

In my experience, and in the experience of the authors, the behavior you describe is what is to be expected. So labeling them as the problem may, on some higher plane, be correct. But if your goal is to actually effect change instead of categorizing, the approach is ineffective.

The books suggest the reader try to be above all of this. They urge the reader to ignore the tone and focus on the message. But they also emphasize that expecting others to reach this level is expecting too much.

BTW, lest I be misunderstood, neither I nor the books are saying there isn't anything like an obligation, and that you should not have shoulds. Of course everyone will have them. The key is to realize that your set of shoulds will be incompatible with others. You could have a long, multi-day conversation with the other to align your sets, but if you just change the style (and tone) of the conversation, you won't need to.

Tone policing is a broad concept. For some who do it, it is very sincere advice. For others, it is a way of not dealing with the issue. Don't fall into the trap of lumping the two crowds in the same boat.

Finally, there's another reason people silently ignore or are turned off from aggressive tones. Many activists have the mentality of "Always find ways not to be ignored" which often translates to "Shout louder and be in their face" (literally have heard some say this). Their whole strategy becomes one of "Escalate till you can't be ignored". This rubs many (most?) people the wrong way. Without even knowing anything about the subject at hand, a lot of people will automatically switch to "Oh yeah! Let me show you how well I can ignore you!"

There's a maxim in the field of negotiations: The more pressure you apply, the stronger the wall the other will build around themselves. This dynamic is entirely independent of the message. Their advice is always what you may not want to hear: Change the posture (including the tone).


The bit I struggled with is that in many fields either the majority are wrong/don't have the solution (argumentum ad populum), or listening to expertise/experience gives you too much input, or it simply assumes that experience is more likely to yield a better solution to a problem. In their example, if Dr. Ignaz Semmelweis had asked his more experienced peers about approaching the problem of infections, they would have told him he was barking up the wrong tree with hand washing.

The really difficult part of any venture is knowing how to distinguish wise advice from just another opinion.

Many great solutions come from people with "crazy" thinking and I would expect they could have caused great damage (or perhaps have - jet engines) but otherwise we would be moving very slowly as a planet?


The downside is that most "crazy" thinking produces crazy results.

Take Thanos: with one snap of the fingers he destroys 50% of the intelligent beings in the universe and ends all of the problems associated with overpopulation, right?

But, to quote a story I heard somewhere: if you have a coke bottle in a landfill growing bacteria, which double in population every 30 minutes, such that the number of bacteria will overwhelm the available resources at 12:00 noon, when is the bottle half-full? 11:30. At 11:00 they look around and 3/4 of the bottle is free. If they find another identical bottle next to them, that prevents the crash... until 12:30.
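For what it's worth, the doubling arithmetic is easy to sanity-check with a toy Python snippet (hypothetical numbers, just a sketch of the exponential-growth point, not anyone's actual model):

    # Toy model: population doubles every period, capacity is fixed.
    # Halving the population ("the snap") only buys one extra doubling period.
    capacity = 1_000_000
    population = capacity       # at the moment of the snap, already at capacity
    population //= 2            # the snap: half remain
    periods = 0
    while population < capacity:
        population *= 2         # one doubling period passes
        periods += 1
    print(periods)              # prints 1: back at capacity after a single period

In other words, on an exponential curve, halving buys you exactly one doubling time.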

Thanos is a complete idiot.

On the other hand, sure, the majority may be completely wrong, but if you ignore them, your solution probably isn't going anywhere.


Are you saying that Thanos just delayed overpopulation for a constant amount of time, and didn't really solve anything?


No, he's saying that Thanos just delayed overpopulation for a constant amount of time, and didn't really solve anything.


Yes, I'm saying that Thanos just delayed overpopulation for a constant amount of time, and didn't really solve anything.


Unless the universe will only continue to support life for less than that constant amount of time. A temporary fix is permanent, for sufficiently large values of "temporary." (See also: UUIDs.)


> The really difficult part of any venture is knowing how to distinguish wise advice from just another opinion.

The usual advice is to choose a trusted advisor with lots of experience in the field, listen to and follow his advice and see if it works. If it doesn’t, adapt.

Every other way is wallowing in indecision, losing time. A decision won’t get better with ten opinions of which more than half are unfounded or even incorrect to begin with.


I find the claim that the majority are wrong in fields tough to believe. Not impossible, mind. Just tough.

I would think it depends on what counts as "right." Are they wrong in the way that Newton was wrong? Still far more correct than anyone else? (Obviously, not to that degree.)


Take, for example, an electronics or hardware project. The majority is a bunch of experienced engineers who will tell you that your project will take years based on their own experience with an earlier project. What they neglect to account for is that their earlier project took years because of the state of the art at the time. They're taking experience that applies to the era of NAND flash and board bring-up and applying it to the era of eMMC and main-line Linux drivers. So while their estimate is very conservative, it is also very wrong and you should take it with a grain of salt.

Any judgement that relies on some external context, like the industry state of the art, can no longer be trusted in a different context.


The hardest part of talking with experts is to make sure they answer the same question as you want answered.

"How quickly can we get something that more or less works for a demo?" is very different from "When will this be ready for mass production?" They can't read your mind and often to consider context you are not even aware of.


There is also the aspect of what question people are answering. How long will it take to bring pretty much any electronic product to production? Quite a while. How long would it take you to build a prototype that you can play with? Probably not long at all.


The majority are not wrong by definition but the fallacy is that you cannot assume that the majority are right.

As far as knowledge is concerned, was Newton right for a period until we learned about quantum mechanics or was he always wrong? Was he always right but only within limits that he couldn't know in advance?

Apologies for the philosophy but this is the real complexity of an article about avoiding harm by doing various "good practice" stuff!


“He who knows he is a fool is not the biggest fool; he who knows he is confused is not in the worst confusion. The man in the worst confusion will end his life without ever getting straightened out; the biggest fool will end his life without ever seeing the light. If three men are traveling along and one is confused, they will still get where they are going - because confusion is in the minority. But if two of them are confused, then they can walk until they are exhausted and never get anywhere - because confusion is in the majority.” ― Zhuangzi, The Complete Works of Chuang Tzu


I don't know if I would agree that the majority can't be wrong by definition. I just find it an odd and strong claim. I suspect I misunderstand the point.

What is the claim "the majority are wrong/don't have the solution" meant to support? If I just focused on the "wrong" instead of "don't have the solution", apologies.

For Newton, he was always wrong, judged by his equations' ability to predict everything. He was just closer than those before him, such that the error was undetectable for a long time.

Which is my point in asking. If folks are correct by their predictive and application-based metrics, it is somewhat silly to belabor their being "wrong" in some absolute sense.


Not sure anyone is still following here. For a fun example of an entire field being wrong, look into the history of the causes of ulcers.


I wonder if we will ever know everything about how matter works or if it will be a long series of people who are merely less wrong.


This is the best argument for absolute freedom of speech and rugged individualism I have heard in a while. Thank you!


> Many great solutions come from people with "crazy" thinking and I would expect they could have caused great damage (or perhaps have - jet engines) but otherwise we would be moving very slowly as a planet?

I agree. There's something I like to call Asimov's principle[1]: knowledge almost never does harm; the answer to poor or incomplete knowledge is almost always more knowledge (corrections, extensions, alternatives) instead of forgetting.

And there is always the possibility of trying to forget a piece of knowledge if it turns out to be absolutely harmful -- so the downside is practically bounded while the upside is practically unbounded. If you try to hide something harmful, it's always possible it will be rediscovered later with very poor timing, and the benefit of countermeasures won't be available. If we refrain from discussing AI safety for fear of derailing some holy discussion by the wise sages (what criteria would make anyone good enough to disrupt it?), it seems more likely that when potent general AI emerges we won't be ready.

If Einstein had tried to hide the mass-energy equivalence, or say all of his theories because of the mass-energy equivalence, then when someone later discovered it, things could have been much worse -- if atomic bombs had been discovered in the cold war (discoveries usually start unilaterally), one side could very well have started WW3 (in fact the US nearly started a war with the USSR in the short period they were the sole possessors of the bomb). The fact is it is extremely hard to predict the impact of any individual action, while it seems quite safe to say that in general thoughtful action is usually benign -- this suggests a strong benefit to discussion and discovery of knowledge versus hiding in fear.

An important principle I would suggest instead is commitment to truth. You can make poor arguments, you can be wrong, but as long as you're committed to truth even the incorrect arguments might prove useful -- they might lead to stronger counterarguments, elucidation of fundamentals, etc.

To exemplify, one of the individuals who perhaps most advanced our understanding of Quantum Mechanics was again Einstein, who was a great critic of it -- his criticisms turned out to be all wrong, but they were so strong (intuitively they seemed right) that they brought to light the most interesting features, the 'weirdness', of the theory. Even for relativity, one of the most useful ways of grasping the theory and its implications is by examining "paradoxes" -- which are essentially failed counterarguments.

This failure of commitment to truth is where climate change deniers ("""skeptics""") go wrong -- it's not trying to prove consensus wrong that's harmful, it is failure to adjust in face of mountains of evidence. Reasonable skepticism probably wasn't so harmful when its impact was less clear -- we've improved our models, measurements, etc to address it.

[1] He makes essentially this argument as a foreword to some of his short stories, can't remember which one exactly. I believe it was more or less along the lines of: science and technology (brought by knowledge, discovery) can often be used for great good or great harm, but to reliably avoid the great harm (that could also come from inaction) we usually need more knowledge, more discussion.

And note how my examples are for very impactful work on the brink of wars and political instability, and still discussion and knowledge seem to have been positive. How many people really should worry about the risk of triggering catastrophe from their daily jobs? Some of the points in the article might be applicable, in very restricted cases -- basically if you're dealing with catastrophic scenarios (how many people are routinely exposed to that?). If you ever find a flashing red button while alone in a power station, or find a break for a major cryptographic protocol, you want to triple-check with specialists and be very careful. Otherwise it can turn into futile paralysis by fear (which is harmful to yourself and others).


There are a couple of other issues that could be added to this list.

Credit

One of the sayings that I know to be (mostly) absolutely true is, "there is no limit to what you can accomplish if you don't care who gets the credit." I have a (much more successful, and respected) friend to whom it is something of a mantra.

On the other side, seeking credit has a lot of pitfalls. The obvious one is taking credit for someone else's work; that's just bad and leads to bad results. But further, by aggressively taking credit for things you've done, you can actively force other people out of the field you're working on. Has anyone been bitten by the off-handed "yeah, we took a look at that several years ago" comment?

Further, becoming the face of some project means that you with all of your warts hanging out come to represent the project and its goals. Take care.

Goals

Choose the goals of your project carefully. For one thing, they can take on a life of their own. For example, our modern financial services system has the excellent goal of allocating resources where they can do the most good, but it has become so complex as to be a maze with great freakin' bear traps all over the place.

Then there's opportunity cost. Some goals are laudable, but take on too much emphasis at the expense of other, more reachable, more effective goals. Take the "reducing extinction risk" mentioned (repeatedly) in the article. Sure, somebody should probably worry a bit about the risk of human extinction, but...

"Many experts who study these issues estimate that the total chance of human extinction in the next century is between 1 and 20%.

"For instance, an informal poll in 2008 at a conference on catastrophic risks found they believe it’s pretty likely we’ll face a catastrophe that kills over a billion people, and estimate a 19% chance of extinction before 2100."

The risks they came up with are molecular nanotech weapons, nanotech accidents, superintelligent AI, wars, nuclear wars, nuclear terrorism, engineered pandemics, and natural pandemics. (I'm surprised; global warming didn't make the list in 2008.)

Here's the dealy-o, though: what actually is the risk of human extinction before 2100? 19%? (1 in 5, really?) Their conservative 3%?

"Nanotech" currently is at most an OSHA problem. (Don't breathe in the microparticles!) The risk of "grey goo" is likely pretty damn low, given the history of the last 30 years of nanotechnology. (First thought on hearing of the possibility of nano-machines? "Ya mean, like proteins?")

Conventional wars don't actually kill that many people; they tend to disperse too easily. I'm even given to understand that the effect of major wars is an increase in the rate of population growth. Nuclear weapons, on the other hand, are very, very bad...for cities. But they're unlikely to do anything noticeable to people in sub-Saharan Africa, South America, the Australian outback, or Mongolia.

Pandemics have been a problem before, and they'll be a problem again, but I'll let somebody else describe the problems with an infectious agent capable of killing all of its hosts. Likewise, climate changes have been problems before, and have led to bad outcomes. But killing everyone has never been on the table. And for AI, I'm more concerned with the AI that runs your car off the road because it's not actually able to perceive the lane markers.

Individually, each of those is bad. They'll possibly kill billions of people and possibly lead to the collapse of civilizations---some of them have done so before. But complete extinction is incredibly unlikely and "ending all life on Earth" is just silly.

But human extinction is an issue that will get attention. It'll sell newspapers. And more than some minimal level of resources spent on it means less resources for other issues. Like, say, identifying and addressing actual problems with nanomaterials or wars or infectious agents.


I agree, humans are probably the single hardest among multicellular life to entirely wipe out. Not because we're hardier than cockroaches, or hibernate when it's cold.

There's just so many of us, and every last one of us would be hellbent on survival. At this point in our technology and understanding, I bet there could be survivors of extinction level events equaling the K-T extinction that wiped out the dinosaurs.

People already have bomb shelters that can last them months; feeding people does not require sunlight, only dirt, water, lamps and electricity. Nuclear winter can't stop the flow of electricity, survivors of any event can jerry-rig surviving wind turbines, nuclear power, geothermal, or hydro to create life sustaining electricity.

Would the survivors be smart enough to accomplish such feats? Of course, assuming they could read. The internet may collapse but the abundance of printed material, even doomsday vaults containing encyclopedias can provide for entertainment and education in long post-apocalyptic nights.

These things exist: https://www.cbsnews.com/pictures/amazing-doomsday-bunkers-of...

When people fret and worry about the fate of the human race, I'm not worried. People will adapt and figure it out like they always have. The only thing people need to worry about is their own survival in harsh times.


> survivors of any event can jerry-rig surviving wind turbines, nuclear power, geothermal, or hydro

Excepting nuclear, where do you think the energy for those things comes from?


I feel weirdly (perhaps irrationally) about the effective altruism people including 80,000 hours. I just can't shake the cult-y vibes they give off. Am I alone in this?


I'm pretty sure that the movement is a net positive for the world, because it attracts a different crowd (younger, STEM, etc) than altruistic endeavours traditionally do.

The basic drive to empirically evaluate one's actions is also sound, had been simmering in the "traditional" community for quite a while, and would have become more relevant in some way or another (eventually). If you look at the output of GiveWell, to cite just one example, their deep dives on policies and charities make for fascinating reading.

Yet, of course, any such trend is bound to find people overdoing it, or cargo-culting it, or whatever. Just look at the split within the community to see this perfectly illustrated: there's one group that wants to focus exclusively on AI risks, because they assess it as having the potential to wipe out humanity (rendering the probability meaningless in their calculations).

And there is another group that wants to focus on animal suffering in the wild. Tadpoles, this group says, die in the billions each year, and the difference of their nervous system from humans' is just a question of degree.


> And there is another group that wants to focus on animal suffering in the wild. Tadpoles, this group says, die in the billions each year, and the difference of their nervous system from humans' is just a question of degree.

I don't want to redirect all of human effort to reducing animal suffering, but I would sleep a lot better if I were sure these people are wrong.


I was a little skeptical at first as well, but I initially got involved when doing a career quiz and then by helping out with the Estonian chapter of Effective Altruism. I'll say that, at least on an individual basis, they're quite open to other thoughts and opinions, and they acknowledge that what they believe doesn't necessarily follow for everyone (e.g. the idea that you can 'earn to give' - it only works if you have the personality type where you can do a job such as Ibanking and satisfy your need to do good by giving rather than participating as actively).


I have the same feeling. It concerns me because I'm not sure how to differentiate between my subconscious conservatism and my instinctive backlash against cultishness and/or over-complication.

The only thing I can put my finger on is this: those organisations - and this article - present themselves as being above-average rational and insightful. At least that's my impression. Yet many of these suggestions seem to be speculation based on very little data, for example the point about them not having competitors. Many others could simply be rephrased as "take a holistic view", for example considering whether recruiting people to your company is good for the cause as a whole.

Don't get me wrong, there is some insight here, it's well-written, and some of these heuristics might turn out to be helpful to some people. I just think that the website it's published on is inflating its perceived significance.


The history of the movement started in the Oxford Uni philosophy department. I could be wrong, but having read a couple of the early books, I think the focus started much more on the ideas that: 1) you have an equal moral obligation to do good for all humans as you do for the humans around you, and having one without the other is not really logically justifiable; 2) some charities are orders of magnitude more effective than others.

I agree somewhat with your sentiment with regards to where they are now, with a lot of their focus on far-future risk avoidance. That stuff can't really fit, because it can't be reasoned about very precisely/scientifically. It's gone from a movement with emphasis on measurement and scientific methods to having a lot of emphasis on things that can't be measured. I think part of the problem is literally it started with some of the brightest guys around and as it gets disseminated to general pop, they lose and distort the message.


Ask yourself this, do you disagree with the basic premise of the article, that the law of unintended consequences applies to charitable endeavors? If not, do you disagree with the ways they say it can manifest itself?

Now, ask yourself, if you don't disagree, why did you write this?

I know from actual experience that a large number of people, when their actions do harm, fall back on "I was just trying to help." I wish more people read, and followed, this kind of advice.


It should be clear from my post that I don't directly disagree with the premises of the article. I'm questioning its significance, novelty, effectiveness, and interest.

> why did you write this?

I was exploring a shared, unexplained feeling of discomfort. This is an intellectually and emotionally fulfilling thing to do.

> I know from actual experience that a large number of people, when their actions do harm, fall back on "I was just trying to help." I wish more people read, and followed, this kind of advice.

In my experience those people are not the same types that would read this article, nor do they seem to be the target audience. The target audience appears to be leaders and employers in "fragile fields".


It wasn't clear. My theoretical questions were to get you to answer your own question, which you started off with:

> I have the same feeling. It concerns me because I'm not sure how to differentiate between my subconscious conservatism and my instinctive backlash against cultishness and/or over-complication.


Nope, same here. Feels a bit (a bit) like religion for atheists.

For example I don't dislike Nick Bostrom but I feel like he's being quoted so uncritically in this article - not totally unlike how a few years ago some HN commenters could quote Paul Graham essays as holy scripture.


In short, no. I don't know how to put it in long.


I get that feel from anyone who cares passionately about something. Especially when that something has a deep philosophical core that challenges us to cut away our programmed/instinctive behaviors and critically analyze to find meaning in a universe that may not have any. Existentialism leads to ennui.


My bigger issue is the slight whiff of... dare I say it? Privilege?

Look, don’t get me wrong. Publicising good causes and helping people direct their charity can do no harm. I won’t criticise that. But instructing your readers to do economics PhDs, or join an investment bank, or take an internship at a thinktank, is assuming they enjoy some enormous opportunities.

Most people cannot do those things, not even if they went to Oxford. They cannot do an MBA or live on the subsistence wages of the modern academic. Like most human beings in the history of civilisation they can only choose how to be instruments of power and capital, which often means doing things only indirectly related to the public good.


They are aware of this, but focus on that small portion of the population anyway because the people that can go work at an investment bank provide a ridiculously disproportionate bang for their buck.

Tangentially, I think you're underestimating what a typical Oxford grad can do. Most of them absolutely can do one of those things, especially if they decide to before they graduate.


I went to Cambridge. No way could I have afforded an unpaid internship at a thinktank after I graduated.

Possibly I could do it now, but even if I obtained that golden policy job - how would I survive in London? Housing here is more expensive than even San Francisco.

My degree has opened many opportunities - sure. But I can't pay my rent with it.


No... I, for one, would really like to know what the actual (measurable, practical) effect of effective altruism is. I definitely hope that it's not just like, we do nothing but more effectively.

I likewise question the efficacy of efforts like eradicating malaria. I can't help but feel all that money would be better spent on technology (not startups, but things like fusion and other energy research, nanotech, the space industry, etc.), even if for-profit. Like, no matter how many Africans you can save from malaria, they will keep dying or living shitty lives because of poverty, famine, wars... Only technology can actually change the world (and politics, but we can't really seem to be able to do much about that...).


Malaria is one of the biggest issues preventing Africa as a continent from going forward. Where it’s worst, malaria contributes significantly to childhood brain damage, causing untold amounts of lost economic growth as generation after generation is stunted for life by the disease. 12 billion USD annually is spent on directly treating malaria patients, with many times that being lost due to long-term damage. [0]

Because of this, I’m very skeptical of the idea that dealing with malaria isn’t one of the most effective use of resources in improving countless lives right now.

[0]https://www.cdc.gov/malaria/malaria_worldwide/impact.html


https://blog.givewell.org/2018/06/29/givewells-money-moved-a...

I feel like that pretty comprehensively answers your question. GiveWell is a[n EA-affiliated] charity which empirically evaluates the effectiveness of other charities, and aims to convince charitable givers to give to those they deem most effective. They've had a lot of success in this goal. Their top charities are listed, and in-depth reasoning for their evaluations is available.


Ultimately this can all be boiled down to the idea of working smarter, not harder. Unfortunately mankind's definition of smart varies as wildly as their actual intelligence.


> working smarter not harder

Not even. I think it's more about, what to work towards - i.e. the goal. I mean, feeding hungry people is definitely a worthwhile goal, but... you have to do it again, tomorrow. It's not a scalable solution.


What if feeding hungry people today is what it takes for them to feed themselves tomorrow? Africa currently has millions of entrepreneurs, and half the rare earth minerals in your computer's chips and a plurality of the beans for the daily cup of Starbucks you need to do your vastly more lucrative job with it come from there. Not only is the people of Africa not dying (or dying less fast) something that can pay dividends in the future, you directly benefit from it. (Most of the aid we send to Africa is not even remotely altruism, effective or not, although most of it is also not very effective.)

I think the goals of NGOs working in that space are generally in the right place. It's true that they seldom do anything scalable in the sense of transformational research or marked improvement in processes, but the structure (of funding) somewhat prevents them from doing so. I think the most scalable thing we can do for poor countries right now is make information, education, food and essential medication as available and as cheap as possible for as much of their population as possible, and let these children's children save themselves.


The point I was trying to make is that the order of operations matters. But we can only judge the outcomes of our efforts based on their effects, not their original intent.


EA is, if you read between the lines, a program to maximize deaths from starvation over other causes. It is also a form of money laundering: the number of tiers of EA organizations allocating money is always growing, and the money increasingly goes to employees of charities.


"cataloguing these risks is important if we’re going to be serious about having an impact in important but ‘fragile’ fields like reducing extinction risk"

I certainly hope that I'm mistaken, but that sounds an awful lot like a pro-eugenics bit of propaganda.

Personally, I would suggest that a clear intent toward 'love thy neighbor', rather than 'we're the best set of people to fix your problem for you', is best for humanity.


I can tell you that the "extinction risks" that EA people are likely to be concerned with are things like nuclear war, runaway AI, runaway bioweapons, catastrophic feedback loops related to global warming, etc. The "extinction" they're referring to is that of the entire human race.

If you thought they were referring to something else, I wonder where you got that idea.


Imagine a first year medical student who comes across a seriously injured pedestrian on a busy street, announces that they know first aid, and provides care on their own...but imagine that a passerby who was about to call an ambulance refrained because the student showed up and took charge

Then he's not following his first aid training. One of the first things they taught in my first aid class is to send someone for help first -- find someone in the crowd and tell them specifically to call 911. Don't just tell the crowd in general to call for help or everyone may assume someone else is doing it, direct someone in particular to call.


That reinforces the article though.

"You take on a challenging project and make a mistake through lack of experience or poor judgment"

The student thought they were doing good, but they did not ask someone to call for help due to lack of experience / poor judgement, worsening the situation potentially.


There is a German word for this: verschlimmbessern. This roughly translates in English to "imworsenprove". :)



They have a word for everything!


Because the desire to do a good deed causes people to discount the ramifications of detrimental effects by rationalizing with the “it was for the best” adage


The thing I hate most is the "We had to do something" statement, justifying the horrible outcomes they created in a vain attempt to do a little good.

No, you didn't have to do anything. Doing nothing is actually sometimes better than doing something stupid.


Which is why Avengers: Infinity War (Thanos) was the only Marvel movie I ever liked.


A good example of that is Germany.

They removed Nuclear from their energy sources and now need to use coal instead to compensate.

But it gets better than that.

Denmark focused primarily on wind energy, with the consequence that this summer's heatwave resulted in them having to get their energy from Germany, which, now without nuclear, had to use even more coal.


> They removed Nuclear from their energy sources

Nope, it's not 2022 yet [1].

> and now need to use coal instead to compensate.

Looks to me that wind/solar is compensating for the gradual decline in nuclear, and also for the bulk of increasing demand [2].

[1] https://en.wikipedia.org/wiki/Nuclear_power_in_Germany#Closu...

[2] https://www.cleanenergywire.org/factsheets/germanys-energy-c...


It's also coal.


The article seems written in a "gentle reminder" tone that implies a basic attribution error about the sort of people they are reminding. As a corollary to a famous adage, I would suggest that it is best not to attribute to ignorance what can be explained by incentives.


One that has gotten me and some of my cohort is the idea that if you have a below-average income you have to live in an ugly, run-down neighborhood.

But a few too many beautification projects and property taxes start to go up. Now you’re pricing people out of their neighborhood. With renters they don’t even get the benefit of selling their house.

Most of the solutions I can think of could be easily gamed by speculators, which makes me wonder if the others just have a flaw I don’t see.


This only happens in a specific context, though. If you were in a theoretical environment where there wasn't any demand for beautified neighborhoods, the value of the housing wouldn't rise if you beautified it.

The circumstance though isn't unique and is applicable in a lot of different markets. Investing in almost anything makes it more valuable. Be that housing, a country, a company, a farm or a person. That is why education is such a potent force multiplier.

Housing, though, has an elephant in the room: how bad the situation has gotten. It's the same class of problem as healthcare, because neither is a luxury, optional good. People need housing, and that housing need requires parameters that are in conflict with one another. The worst part is that because sustainability of housing trumps all other aspects of it, you can easily find yourself in an unsafe, insecure environment out of necessity. Beautification makes living there more enjoyable and prosperous, but is trumped by the cost-benefit analysis of living there. When people are already sacrificing personal safety for job security, they quickly get pushed out if you make the housing more valuable for any reason but enhanced job security.



> Everyone understands that one risk of failure is that it tarnishes your reputation. But, unfortunately, people will sometimes decide that your mistakes reflect on your field as a whole. This means that messing up can also set back other people in your field.

I find this to be the case with non-profits and others attempting to make big pushes to promote local entrepreneurship. I have objected to several attempts based on the organizers focusing on the wants of donors (not the entrepreneurs), lacking any marketing plan beyond "if you build it they will come" and not having long-term plans for sustainability. Every failed attempt sets back the next one by 2-3 years but no one wants to listen to a naysayer.


Interesting article, but I would have loved it if they touched upon how to effectively follow these principles in a world where there's competition. For instance, getting feedback, thoroughly vetting ideas, etc., is a good practice. But if you are competing against those who exploit the limited attention of investors/peers to quickly get ahead (in the short term), they _could_ get an enormous first-mover advantage. How does one deal with that?


The guide they're describing is for playing a pro-social game where everybody is trying to score points for the common good (i.e., to save/better lives). A competitive environment that is zero-sum in the short term has different rules.


"Imagine a first year medical student who comes across a seriously injured pedestrian on a busy street, announces that they know first aid, and provides care on their own. They’ll look as though they’re helping. But imagine that a passerby who was about to call an ambulance refrained because the student showed up and took charge. In that case, the counterfactual may actually have been better medical care at the hands of an experienced doctor, making their apparent help an illusion."

I think this is a poor analogy. Or maybe it's an excellent analogy for the dilemma the article glosses over: would the "right" answer be for the medical student to not help someone very obviously in need? Ambulances don't materialize instantaneously out of thin air, nor do they magically teleport themselves (and the patient inside) to a hospital. That hospital might be overcrowded, delaying treatment further.

Or the med student could actually try to help (or at least triage) and at the same time call an ambulance.


It is a very poor analogy, given that one of the first steps of first aid, after evaluating the situation, is to call emergency services yourself or to direct a specific person to call (targeting someone specific bypasses the diffusion of responsibility).


I think a lot of this boils down to having the right amount of introspection and insight on the part of the individual.


True introspection turns into a math puzzle when you want to go really deep. (It requires constant monitoring and logging of your own mind, and an insane number of tests against self-deception if you want to do it properly. Both are constrained by memory and compute power, which will likely not be sufficient, if for no other reason than that you need to allocate some of them to other priorities.) I think it's a sign of wisdom to admit that it's too big a challenge for an individual to rely on as a substitute for resources like this.


Well, I read a bit, then thought: "things like being verbose, generic, and over-guarded in offering advice, and so wasting lots of readers' time without making an impact on their behaviour?"

If they'd given one direct piece of easily actionable advice, the actual impact might have been far larger ...

"So what?" - a Church sermon I heard well over decade ago, the speaker said basically if there's nothing that listeners will remember and act on then your sermon is moot. You can give a tonne of great advice, but sometimes less is more impactful.

Perhaps they do that too. The title gave me high hopes.


80,000 hours appeals to a particular type of person, and I think this is good but vague advice for that sort of person.

Reputation and social standing matter, you have to network, be humble and learn from people older and wiser than you, don’t try to go it alone, be willing to work on smaller challenges — these are all things I think people in this specific subculture would probably benefit from hearing.


There should be no shame in admitting you made a mistake - I've made several hundred thousand dollars in mistakes - sometimes they turned out to be the same mistake twice - but I owned up to them, and worked harder to make it right.

I've often been bitten by:

Poorly defined requirements

Incomplete understanding of the problem

Incomplete understanding of the solution

Customer-created access difficulties

Customer-created process issues

Poor documentation or data inputs

Excessively complex customization driven by customer needs

I'm getting better with time, and I'm better at determining what the actual requirements are, and I now more fully understand what my solution is capable of too.


There's no shame, but it is still a signal.

Similar to the study a while back of whether VCs would rather fund someone who's a "natural" at what they do or someone who's had to work hard to reach that "natural" level. (It's the former.)


I get replies from Enterprise support reps of this sort. They vaguely address my question without committing to an answer or solution.

That way the onus is on me to guess what might work for my case, and meanwhile they can mark the issue resolved.


That's terrible. A support agent should never mark an issue resolved until the client has said the issue is resolved or has been unresponsive to follow-up communication for some time.


Unfortunately, in practice the lowly paid support agent is being bean-counted on the number of outstanding tickets they have, and being told to periodically fix that problem.

"You get what you measure."


At least the enterprise support people have a harder time just lying and making stuff up in order to close the ticket, as I have experienced consumer support reps doing.

https://news.ycombinator.com/item?id=18072989


Luckily, they do state the lessons to be learned up front:

> In brief, it raises the importance of finding good mentors, consistently seeking advice and feedback from experienced colleagues, and ensuring you’re a good fit for a project before you take actions with potentially large or long-term consequences.


Well, but that is not what the title presents; just the dual (what to do if you want to help).


Keep reading until you get to the "how to mitigate these risks" section.

But I have to agree that the article could have been written by Chidi Anagonye. It had that "no matter what you do, you're always wrong" sort of vibe.


Totally agree on the case for focus and efficiency.

I'd also add that much of what we hear or read has slight effects on us that can be profound over time.

In the case of books, nearly every book I read changes my outlook and direction, sometimes imperceptibly, but I believe the effects can be large over time.

Of course, quality matters - "you are what you eat" or in this case what you read.

Reminds me of a quote by Charles de Gaulle:

"Don't ask me who's influenced me. A lion is made up of the lambs he's digested, and I've been reading all my life."


I think 80000 hours targets a fairly specific audience. Their style in general is to be in-depth and long-winded.


I’m a long-time follower of 80000 Hours. I agree with your comment, but still found this particular post to be far too verbose. The advice borders on non-actionable far-mode thinking.


I haven't yet read this one, so it is certainly possible.


I'd expect an organization dedicated to making the most of the finite time people have would care about not wasting so much of it.


I was hoping we'd get a list of times people tried to help and actually hurt instead. Felt the same way you describe after reading the article.


You did? It was broken down into types of failure, and each type had examples/hypotheticals of where that type of failure led to bad outcomes.


The unfortunate problem is that all real problems[1] are not amenable to "one direct piece of easily actionable advice" other than "take two aspirin and let me know when it gets worse". Their nuances have nuances, and the only absolutely true statement you can make is, "if you have a straightforward, easy solution to the problem, it is wrong."

[1] I originally thought "some", then "many", then "most", but all of those words underplay the situation.


Are you advocating for a listicle?


The road to hell is paved with good intentions.


The medical field has a term for this: iatrogenic (harm caused by the treatment itself).


The moral seems to be "Don't do anything, because there are always too many risks, many of which aren't obvious." If we followed this, nothing would ever get invented or done, and we'd still be hunter gatherers, so I'm not really feeling it.


Naturally a pessimist on these issues. Listening to graduation speeches on how we're gonna change the world, as if it were this one weird tip discovered by a student. (Philosophers hate him!)

Bleh. You'll change the world alright. Likely, not for the better.


That is not a good favicon; I was skimming for a list of '8 ways people trying to do good'.


> Ways people trying to do good accidentally do harm instead and how to avoid them

Like Facebook engineers?


To do good requires understanding the issue at hand and devising a measurable plan for improvement.

This is not what many, if not most, people do, or want to do, when they "try to do good". They simply want to do something based on what _they_ see as 'good'.


Socialism. /thread


You’ll never get anything done if you give in to this sort of analysis paralysis.


These are just some factors you may want to take into account. Usually things like this have some clear-cut answers. If these questions require real thought, and potentially paralysis, then that's a good thing. If you are paralysed by the fact that you aren't sure whether what you are doing may actually do more harm than good, that's good.


Sometimes getting nothing done is the best outcome.


Rare, for the individual. Common, for Congress.


Thanks. You use the word 'field' 54 times and 'risk' 33 times in the article but never explain what you mean by them. I found this confusing. E.g. at one point you say that 'reducing extinction risk' is a field, but then later say that not every risk is pressing in every field. Is 'reducing extinction risk' both a field and a risk?
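(Those numbers are easy to reproduce with a rough tally; a minimal sketch, assuming the article body text is saved locally as article.txt, a filename chosen just for illustration:)

    # count 'field' and 'risk' (plus plurals) in the saved article text
    import re
    text = open("article.txt", encoding="utf-8").read().lower()
    for word in ("field", "risk"):
        print(word, len(re.findall(r"\b%ss?\b" % word, text)))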


field: A particular field is a particular subject of study or type of activity.

So the authors are using 'field' to mean an area of study, which is standard usage in English, I think, so it probably didn't need to be explained further.

When the authors wrote: "Nonetheless we think cataloguing these risks is important if we’re going to be serious about having an impact in important but ‘fragile’ fields like reducing extinction risk."

They are talking about 'reducing extinction risk' being a field of study, and the risks of doing damage to that new field with early research.


Thanks

> They are talking about 'reducing extinction risk' being a field of study, and the risks of doing damage to that new field with early research.

Then this is an unfortunately named field to introduce first, given that the article is about 'risks', which are not 'fields'.


'risk X' is a risk, and 'reducing risk X' is a field.

There is the (meta-)risk that you harm a field (the 'harming field Y' risk).

They're studying that (meta-)risk here, in an attempt to reduce it. The article is thus squarely in the 'reducing risk Z (of harming field Y (of reducing risk X))' field.

Your misunderstanding of it poses certain risks, that this comment attempts to reduce, with a concomitant risk of failure, but that gets a bit complicated.


Ha :)



