I feel weird (perhaps irrationally so) about the effective altruism people, including 80,000 Hours. I just can't shake the cult-y vibes they give off. Am I alone in this?
I'm pretty sure the movement is a net positive for the world, because it attracts a different crowd (younger, more STEM-oriented, etc.) than altruistic endeavours traditionally do.
The basic drive to empirically evaluate one's actions is also sound, had been simmering in the "traditional" community for quite a while, and would have become more relevant in some way or another (eventually). If you look at the output of GiveWell, to cite just one example, their deep dives on policies and charities make for fascinating reading.
Yet, of course, any such trend is bound to find people overdoing it, or cargo-culting it, or whatever. Just look at the split within the community to see this perfectly illustrated: there's one group that wants to focus exclusively on AI risks, because they assess it as having the potential to wipe out humanity (rendering the probability meaningless in their calculations).
And there is another group that wants to focus on animal suffering in the wild. Tadpoles, this group says, die in the billions each year, and the difference of their nervous system from humans' is just a question of degree.
> And there is another group that wants to focus on animal suffering in the wild. Tadpoles, this group says, die in the billions each year, and the difference of their nervous system from humans' is just a question of degree.
I don't want to redirect all of human effort to reducing animal suffering, but I would sleep a lot better if I were sure these people were wrong.
I was a little skeptical at first as well, but I got involved by doing a career quiz and then by helping out with the Estonian chapter of Effective Altruism. I'll say that, at least on an individual basis, they're quite open to other thoughts and opinions, and they acknowledge that what they believe doesn't necessarily follow for everyone (e.g. the idea that you can 'earn to give' only works if you have the personality type where you can do a job such as investment banking and satisfy your need to do good by giving rather than by actively participating).
I have the same feeling. It concerns me because I'm not sure how to differentiate between my subconscious conservatism and my instinctive backlash against cultishness and/or over-complication.
The only thing I can put my finger on is this: those organisations - and this article - present themselves as being above-average rational and insightful. At least that's my impression. Yet many of these suggestions seem to be speculation based on very small data, for example the point about them not having competitors. Many others could simply be rephrased as "take a holistic view", for example considering whether recruiting people to your company is good for the cause as a whole.
Don't get me wrong, there is some insight here, it's well-written, and some of these heuristics might turn out to be helpful to some people. I just think that the website it's published on is inflating its perceived significance.
The movement started in the Oxford University philosophy department. I could be wrong, but having read a couple of the early books, I think the focus started much more on the ideas that:
1: you have the same moral obligation to do good for all humans as you do for the humans around you; having one without the other is not really logically justifiable
2: some charities are orders of magnitude more effective than others.
I agree somewhat with your sentiment with regards to where they are now, with a lot of their focus on far-future risk avoidance. That stuff can't really fit, because it can't be reasoned about very precisely or scientifically. It's gone from a movement with emphasis on measurement and scientific methods to one with a lot of emphasis on things that can't be measured. I think part of the problem is that it literally started with some of the brightest guys around, and as it gets disseminated to the general population, the message gets lost and distorted.
Ask yourself this, do you disagree with the basic premise of the article, that the law of unintended consequences applies to charitable endeavors? If not, do you disagree with the ways they say it can manifest itself?
Now, ask yourself, if you don't disagree, why did you write this?
I know from actual experience that a large number of people, when their actions do harm, fall back on "I was just trying to help." I wish more people read, and followed, this kind of advice.
It should be clear from my post that I don't directly disagree with the premises of the article. I'm questioning its significance, novelty, effectiveness, and interest.
> why did you write this?
I was exploring a shared, unexplained feeling of discomfort. This is an intellectually and emotionally fulfilling thing to do.
> I know from actual experience that a large number of people, when their actions do harm, fall back on "I was just trying to help." I wish more people read, and followed, this kind of advice.
In my experience those people are not the same types that would read this article, nor do they seem to be the target audience. The target audience appears to be leaders and employers in "fragile fields".
It wasn't clear. My theoretical questions were to get you to answer your own question, which you started off with:
> I have the same feeling. It concerns me because I'm not sure how to differentiate between my subconscious conservatism and my instinctive backlash against cultishness and/or over-complication.
Nope, same here. Feels a bit (a bit) like religion for atheists.
For example I don't dislike Nick Bostrom but I feel like he's being quoted so uncritically in this article - not totally unlike how a few years ago some HN commenters could quote Paul Graham essays as holy scripture.
I get that feeling from anyone who cares passionately about something. Especially when that something has a deep philosophical core that challenges us to cut away our programmed/instinctive behaviors and critically analyze to find meaning in a universe that may not have any. Existentialism leads to ennui.
My bigger issue is the slight whiff of... dare I say it? Privilege?
Look, don’t get me wrong. Publicising good causes and helping people direct their charity can do no harm. I won’t criticise that. But instructing your readers to do economics PhDs, or join an investment bank, or take an internship at a thinktank, is assuming they enjoy some enormous opportunities.
Most people cannot do those things, not even if they went to Oxford. They cannot do an MBA or live on the subsistence wages of the modern academic. Like most human beings in the history of civilisation they can only choose how to be instruments of power and capital, which often means doing things only indirectly related to the public good.
They are aware of this, but focus on that small portion of the population anyway because the people that can go work at an investment bank provide a ridiculously disproportionate bang for their buck.
Tangentially, I think you're underestimating what a typical Oxford grad can do. Most of them absolutely can do one of those things, especially if they decide to before they graduate.
I went to Cambridge. No way could I have afforded an unpaid internship at a thinktank after I graduated.
Possibly I could do it now, but even if I obtained that golden policy job - how would I survive in London? Housing here is more expensive than even San Francisco.
My degree has opened many opportunities - sure. But I can't pay my rent with it.
No... I, for one, would really like to know what the actual (measurable, practical) effect of effective altruism is. I definitely hope that it's not just like, we do nothing but more effectively.
I likewise question the efficacy of efforts like eradicating malaria. I can't help but feel all that money would be better spent on technology (not startups, but things like fusion and other energy research, nanotech, the space industry, etc.), even if for-profit. No matter how many Africans you save from malaria, they will keep dying or living shitty lives because of poverty, famine, and wars. Only technology can actually change the world (and politics, but we can't really seem to be able to do much about that...).
Malaria is one of the biggest issues preventing Africa as a continent from going forward. Where it's worst, malaria contributes significantly to childhood brain damage, causing untold amounts of lost economic growth as generation after generation is stunted for life by the disease. Around $12 billion USD is spent annually on directly treating malaria patients, with many times that being lost due to long-term damage. [0]
Because of this, I'm very skeptical of the idea that dealing with malaria isn't one of the most effective uses of resources for improving countless lives right now.
I feel like that pretty comprehensively answers your question. GiveWell is an EA-affiliated charity which empirically evaluates the effectiveness of other charities and aims to convince charitable givers to give to those it deems most effective. They've had a lot of success in this goal. Their top charities are listed, and in-depth reasoning for their evaluations is available.
Ultimately this can all be boiled down to the idea of working smarter, not harder. Unfortunately, mankind's definition of "smart" varies as wildly as their actual intelligence.
Not even. I think it's more about, what to work towards - i.e. the goal. I mean, feeding hungry people is definitely a worthwhile goal, but... you have to do it again, tomorrow. It's not a scalable solution.
What if feeding hungry people today is what it takes for them to feed themselves tomorrow? Africa currently has millions of entrepreneurs, and half the rare earth minerals in your computer's chips, and a plurality of the beans for the daily cup of Starbucks you need to do your vastly more lucrative job with it, come from there. Not only is the people of Africa not dying (or dying less fast) something that can pay dividends in the future; you benefit from it directly. (Most of the aid we send to Africa is not even remotely altruism, effective or not, although most of it is also not very effective.)
I think the goals of NGOs working in that space are generally in the right place. It's true that they seldom do anything scalable in the sense of transformational research or marked improvement in processes, but the structure (of funding) somewhat prevents them from doing so. I think the most scalable thing we can do for poor countries right now is make information, education, food, and essential medication as available and as cheap as possible for as much of their population as possible, and let these children's children save themselves.
The point I was trying to make is that the order of operations matters. But we can only judge the outcomes of our efforts based on their effects, not their original intent.
EA is, if you read between the lines, a program to maximize deaths from starvation over other causes. It's also a form of money laundering: the number of tiers of EA organizations allocating money keeps growing, and the money increasingly goes to the employees of the charities themselves.
"cataloguing these risks is important if we’re going to be serious about having an impact in important but ‘fragile’ fields like reducing extinction risk"
I certainly hope that I'm mistaken, but that sounds an awful lot like a pro-eugenics bit of propaganda.
Personally, I would suggest that people with a clear intent toward 'love thy neighbor', rather than 'we're the best set of people to fix your problem for you', are best for humanity.
I can tell you that the "extinction risks" that EA people are likely to be concerned with are things like nuclear war, runaway AI, runaway bioweapons, catastrophic feedback loops related to global warming, etc. The "extinction" they're referring to is that of the entire human race.
If you thought they were referring to something else, I wonder where you got that idea.