Effective altruism and the current funding situation (effectivealtruism.org)
114 points by kvee on May 10, 2022 | 112 comments



I really hope this serves as an inflection point for the Effective Altruism crowd to pivot away from some of the weirder things they embraced in the past.

I love the idea of Effective Altruism and what they're trying to accomplish, but in practice it frequently turns into funding for intellectuals to sit around and pontificate about AI risk and other things, as opposed to actually going out into the world and doing altruistic acts.

Some of their previous grants are downright laughable, like spending $30K to distribute copies of rationalist Harry Potter fanfiction (Harry Potter and the Methods of Rationality) despite the fact that it's freely available online. (Source: https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/... Ctrl+F "Harry Potter")

The EA movement has become a bit of a joke in many circles after making too many moves like this, so hopefully this serves as a wake-up call for them to start getting more practical.


For whatever it's worth, HPMOR introduced me to Effective Altruism and lesswrong, etc.

Though I found it online rather than through the $30K distribution, as a result of being introduced to Effective Altruism there I have since donated much more than $30K to highly effective charities and gotten hundreds of other people into donating to highly effective charities too. I also started a company inspired by the EA concepts of working on "neglected problems" - https://80000hours.org/articles/problem-framework/#definitio...

I might be wrong, but I suspect there's a decent chance there was at least one person like me among the people reached by that $30K distribution.


Yep, exactly — and the average EA donates way more than $30,000 over their lifetime.

This is just a creative way to advertise, and I don’t see what’s wrong with charities trying to get their message out there.


When you put it that way, sure.

I think GGP may have been going for a different point though, which is - is EA getting an overly harmful outward reputation / giving the wrong impression to laypeople and others, in a way that inhibits the overall potential reach of the movement? Is the creative advertising causing more harm than good?

At least, that’s the question I’m asking after reading these three posts.

It seems like it’s a common theme among rationalist groups to take a strongly inside view of the situation, based on all the observable facts and known good being done, and totally ignore the bigger picture outside view.


Anecdotally, a ton of my EA friends have gotten introduced to EA through HPMOR. It's not as wild a marketing strategy as OP makes it out to be.


There are multiple EA groups, but I think the most common place to donate is GiveWell? Here's what its donation page looks like:

https://secure.givewell.org/

    Let GiveWell direct your donation
      GiveWell's Maximum Impact Fund

    Support GiveWell's top charities
      Malaria Consortium's seasonal malaria chemoprevention program
      Against Malaria Foundation
      Helen Keller International's vitamin A supplementation program
      SCI Foundation (Schistosomiasis Control Initiative)
      Sightsavers' deworming program
      New Incentives
      Evidence Action's Deworm the World Initiative
      END Fund's deworming program
      GiveDirectly
The recent grants by their Maximum Impact Fund are listed here:

https://www.givewell.org/maximum-impact-fund

the largest of which was $7.8 million to "Sightsavers — Deworming in Nigeria and Cameroon (February 2022)"

Please don't let the fact that some people are concerned about AI safety detract from EA's overarching goal of doing the most good per dollar donated.


Right! My point is that there are good aspects to the Effective Altruism movement, but historically they've done a disservice by also embracing some fringe stuff under the Effective Altruism banner.

Like I said in my comment, I'm hoping this blog post is a signal that they're going to start maturing the organization a bit more and hopefully distance themselves from some of the weirdness that tries to ride the coattails of the Effective Altruism movement.

Go ahead and spin up all of the AI think tanks and Harry Potter fan fiction distribution under a different name, but forcing it under the Effective Altruism banner only drives away donations from people who go into this expecting an organization focused on doing charitable acts in the real world, not funding additional think tanks.


Why aren't EA organizations allowed to be concerned about AI safety? Sure, there's nothing dangerous yet, but surely the trend towards more capable AI has become fairly clear these past few years, and at some point it will get concerning even if only in the wrong hands, and it would be better to be prepared for that beforehand?


Well is it effective? There are a lot of problems that exist right now that could be meaningfully worked on with any amount of money at all.

Not that all organizations need to work on immediate practical problems. But I think all organizations that are named things like "effective altruism" should.

And do they work against any of the actual current problems with applied "AI?" Or is it all just singularitarianist eschatology still?


Check out the current work of Stuart Russell. There's some legit work being done to figure out alignment problems that aren't attempts to solve all human values.

EA basically splits into extremely hard-nosed short-term projects and much riskier but potentially extremely valuable projects aimed at safeguarding the long-term future of humanity. This is a consequence of those extremes being the most neglected.


Exactly. Even in the short term, there are some very worrying behaviours in AI systems which have real consequences. For instance:

* Some researchers recently built an AI system to generate low-toxicity molecules as candidates for medicines. They realised if they changed their system to maximise toxicity, it designed thousands of toxic molecule candidates including the VX nerve agent (https://www.nature.com/articles/s42256-022-00465-9)

* When GitHub released their Copilot AI code completion tool, it would autocomplete prompts like `API_KEY: ` with plausible-looking secrets from its training data

* AI models used for decision-making in hospitals and courts frequently exhibit extreme racial prejudice (eg https://www.nature.com/articles/d41586-019-03228-6)

* and many, many other examples

There's a real sense of utopianism and not much consideration of misuse even on a very short timescale. Even if you don't think AI will become an existential risk in the future, there are enough problems caused by misuse of current systems that it warrants attention.

All of the money I've seen flowing in the AI safety EA space has been put to extremely good use - $30k to a YouTuber making videos on AI safety who introduced me to the field and is my go-to for explaining topics like alignment (what you want vs what you say you want), grants to extremely productive AI safety researchers, grants to fund educational bootcamps and scholarships, etc.

(Disclaimer: I'm an EA aligned person myself so I'm fairly biased!)


“Allowed” isn’t language that GP is using; you’re introducing that.

In fact, GP is suggesting those factions do continue to raise money, under a different banner.

GP’s concern, which I share, is that a great deal of potential effectiveness is being lost to outside perceptions.

EA should inherently be concerned with making the top of their altruism dollar funnel as wide as possible, to channel the most good most effectively.

If the potential non-AI-safety money being left on the table is a greater amount than the money raised for AI-safety causes, is that most effective? For whom?

In a world where that potential money enters the funnel, and AI-safety has and grows a separate funnel of its own, how is that not better?


The whole point of effective altruism is that it operates charity like a business, helping the most people for the least money possible. The benefits of funding nebulous AI safety charities are very unclear.


Your take (that they are driving away donors by giving to causes you think are silly) would seem to be contradicted by the content of the post (that they have raised so much money that they need new models for how to distribute it).


Both could be true simultaneously. The relevant question would be: how much more money is not getting raised?


When I bring up poverty alleviation at EA meetups, I feel like the response I get is "yeah, that's definitely super important, we all donate to effective charities, but it's not as fun to talk about as AI risk, which makes for fun debates".

I feel like EAs see global poverty almost as a solved problem when there is a lot of work to be done still.


You left out that they were distributing the book to participants in math and science competitions, and that their reasoning was that they think the book is effective at teaching scientific reasoning. Donating science books to promising students sounds a lot better than distributing fanfiction.

You may disagree that HPMOR is effective at teaching science, but at least they did it openly. That page has a detailed discussion of the pros and cons of this grant, including the possibility that it could cause "reputational risk" (i.e. people like you making fun of them for it). AFAICT they took that seriously, and decided to do it anyway because they really do believe that it's effective at teaching science.


Ah, but is it the most effective book? Absent strong evidence that it is, choosing that particular book looks like a pretty clear-cut case of systematic bias.


It is a very fun way to learn about science. It was written with the goal to introduce science / rationality ideas through fiction and I would say it does a pretty good job of it.

I think the snark comes from "$30,000" being spent to distribute a "free" book. If you think about it in terms of spending $46 per person to advertise to a very select group of 650 people, it might not sound so stupid.


It is a fun book but I'm really not sure what "science" it teaches.


I said it teaches about science.

From the very first chapter :

> "Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics. There's a quote there about how philosophers say a great deal about what science absolutely requires, and it is all wrong, because the only rule in science is that the final arbiter is observation - that you just have to look at the world and report what you see.

Chapter 8 introduces how to think about peculiar phenomena through children's games, and so on and so forth.


Why? I think it’s perfectly reasonable to think that distributing books about effective altruism would be a very effective use for $30,000 of donations. Most EAs give 10% of their income; if you assume the average EA will make $100,000 a year over a 40 year career (pretty reasonable for people at the IMO), this book only needs 1 of those people to become an EA for this to break even. (In fact, it doesn’t even need one — it just needs to be a 1% chance of producing a single EA!)

My Fermi estimate is probably off, but it’s a pretty sensible argument to me.
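For what it's worth, here's a minimal sketch of that arithmetic, with every input spelled out as an assumption (the $100K income, 10% pledge, 40-year career, $30K grant, and 650 recipients are just the figures assumed above, not official numbers):

    # Back-of-the-envelope sketch of the break-even estimate above.
    # Every input is an assumption from the comment, not a measured figure.
    annual_income = 100_000   # assumed average income per year
    pledge = 0.10             # assumed fraction of income donated
    career_years = 40
    grant_cost = 30_000
    recipients = 650

    lifetime_giving = annual_income * pledge * career_years  # 400,000 per person who becomes an EA
    breakeven_overall = grant_cost / lifetime_giving          # 0.075 -> ~7.5% chance of producing one EA
    breakeven_per_book = breakeven_overall / recipients       # ~0.00012 -> ~0.012% chance per recipient
    print(lifetime_giving, breakeven_overall, breakeven_per_book)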


Does it make sense given that the book is freely available already?


Those participants are mostly going to become mathematics academics, which doesn't seem like the best demographic income-wise if you are trying to get a good return on investment? Especially as HPMOR doesn't directly lead you to the EA movement. I read it and it had some interesting ideas, but it didn't make me realise EA existed.


Presumably the idea was that the distribution/content of the book promotes rational thinking, and that rational thinking primes you for EA. And then rather than become a mathematics academic, you decide to become e.g. a quantitative trader and donate lots of money to effective causes, or take some other high-impact career decision.


It's amusing that people still complain about AI ethics and debiasing (aka "alignment") being an early focus of EA's long after it's become an increasingly relevant research field, even with controversy regularly making the tech news. If anything, that AI focus counts as a success story for effective altruism, as much as the similar case of pandemic preparedness.


One thing I've wondered is when Effective Altruists are going to stop calling AI ethics neglected. AI ethics as a field is now mainstream, in my view, yet people still keep calling it neglected. I personally think Effective Altruists should focus less on AI ethics now as it has gained mainstream attention and focus more on other currently neglected topics.


By neglected, they mean that, instead of dealing with real ethics of AI in practice and the future, they want others to instead entertain their fantasies about scary overlord general AI as if it's just around the corner and coming to get them. They're upset that people aren't as spooked about science fiction as they are.


I think it's telling that the people most critical of AI alignment as cause for concern so rarely engage substantively with the very reasonable case for taking it seriously.


I read GP trying to draw a distinction between “real ethics of AI in practice” and “fantasies about scary overlord general AI.”

The former sounds like it would be full of reasonable cases for taking it seriously.


I know you don't appreciate now how poorly this comment will age, but I at least hope you remember to reflect back on it sometime.


I'll have all the time in the world to reflect on it when the singularity happens, as I'm tortured infinitely by the Basilisk I called science fiction.


> One thing I've wondered is when Effective Altruists are going to stop calling AI ethics neglected.

AI ethics isn’t neglected because AI ethics doesn’t matter. AI alignment matters. If a self-bootstrapping generally intelligent AI emerges the only important thing about it is “Is this aligned with human values?”. If the answer is no we’re all going to die because we’re made of atoms at the bottom of a gravity well and it’s going to use all the atoms there before getting out. The ethics of AI is completely irrelevant in comparison.


I tend to use the terms "AI ethics", "AI safety", and "AI alignment" interchangeably. This may not be technically correct.


It’s not; they’re pretty much completely unrelated fields. AI ethics focuses very little on AI alignment issues, which tends to worry about much bigger and more general problems.

The fact that you’re grouping AI alignment with AI ethics is kind of an indication of the problem; most people have heard so little about AI alignment problems that they assume it’s the same thing as AI ethics.


> they’re pretty much completely unrelated fields

Completely unrelated? Even you must accept that they are related through AI.

But, I now see that AI ethics and AI alignment are different. Thank you. However, I think my larger point still stands as I was thinking mostly about efforts under the banners of "AI safety" and "AI alignment" when I wrote my comments here. I do not believe these efforts are "neglected" in the sense 80,000 Hours uses.


A minuscule fraction of global GDP going into preventing misaligned AI would always seem like neglect if you thought the end of the world was at stake.


> AI ethics as a field is now mainstream

Let me guess, you work in tech and in a city? I personally believe that if a super intelligent AI were to be created, then it would have the same effect on society as the advent of nuclear weapons. So I respectfully disagree that it's not neglected, I don't think the common American thinks about how close we are to human level intelligence or what that would mean for society.


When I say AI safety is not neglected, I do not mean AI safety is not important. I mean that it is not neglected in the sense that increasing the amount of money and people directed at the problem is not likely to help. That's a common definition among effective altruists. (And it's part of why 80,000 Hours says that nuclear security, the example you gave, is not particularly neglected.) I meant that AI safety is mainstream in the sense that it appears in popular journalism and has a large following, nothing more. I did not mean that most people would be aware of the problem.

Also: I'm not a professional programmer, though programming has been one of my job responsibilities in the past, but never all of my job. And I live in a rural area at the moment. Not that any of these are relevant.


By all available evidence it's horribly neglected at Big Tech firms; we keep seeing examples of AI researchers there not being taken seriously despite the quality of their work.


> we keep seeing examples of AI researchers there not being taken seriously despite the quality of their work

This phenomenon is not unique to AI safety. It is often driven by management incentives.

I agree that AI safety is not a solved problem in practice, but that doesn't mean it's neglected. AI safety gets a lot of attention, and it is important, but it's not "neglected" in the sense that too few people work on it or that there isn't enough money in it [0]. I think the marginal impact of a new person in AI safety is roughly zero, all else equal. AI safety folks would probably do best to change their priorities away from "ivory tower" sort of issues towards the practical issues you bring up.

[0] EAs typically use people or money to measure neglectedness. https://80000hours.org/articles/problem-framework/#how-to-as...


What do you think are currently neglected topics?


To give just one example, I think Effective Altruists focus far too little on meta-science. Much of what they want to do depends on meta-science, and in science it's quite difficult to fund that sort of work, so it seems odd to me that it's not considered a top priority on 80,000 Hours.

I'm sure there are other areas, but I haven't put an effort into listing them. Global priorities work is pretty hard.

Also, apparently 80,000 Hours now considers AI safety only "somewhat neglected", so perhaps the EAs behind 80,000 Hours agree with me more than I thought. https://80000hours.org/problem-profiles/positively-shaping-a...


- Buying rainforest land and paying locals to guard it.

- Greening energy production in developing countries. Their electrical grids are already unstable, so even improving energy just when the sun is shining could help.

- Research into oceanic biologic collapse.

- Finding and destroying illegal fishing operations. (Heck, make an AI boat do it, and you'll drum up even more support for AI alignment.)


The first point is exactly what I donated to for a few years based on an EA recommendation 10 years ago!


AI alignment research is still extremely neglected. There's a handful of researchers looking at it and that's about it. There's plenty of coverage/criticism about AI, but it tends to be very different from the kinds of things EAs worry about.

The only place I know of alignment actually being covered in normal media is Vox’s Future Perfect (and by Matt Yglesias, who used to work there) and that’s because EAs literally pay them to cover it.


Apologies if this is curt, but if you think one of the most impactful altruism movements in the world with major focuses in global health and long term outcomes is a joke because they do outreach in odd (but clearly effective) ways, you have missed the point of effective altruism altogether.

You don't get brownie points for how steadfastly you stick to your Overton window. You get brownie points for saving lives.


> You don't get brownie points for how steadfastly you stick to your Overton window. You get brownie points for saving lives.

...by distributing copies of rationalist Harry Potter fan fiction?

I agree that charities that save lives are doing good things. That's why I brought up the fact that some of their spending has historically been kind of ludicrously off base.


You know Maus[0]? It's a comic book about Disney animals. Isn't that just laughable? Wouldn't teaching such a thing in schools--with the aim of imparting serious lessons--be "ludicrously off base"?

Derision for fan fiction and/or Harry Potter can't be the beginning and end of your argument for why HPMOR isn't a good book to introduce nerdy school kids to a handful of valuable ideas.

0. https://en.wikipedia.org/wiki/Maus


It sounds less stupid when you count the fraction of EA funding and activity that HPMOR was already in the causal chain of. It's not how I'd do outreach, but it's not like the choice was arbitrary.


Honestly curious, how would you do outreach?


I don't know, sorry.


So you're literally basing your entire argument to disqualify a movement with clear and public data about the positive impact that has created on a single $30k grant?


> Some of their previous grants are downright laughable, like spending $30K to distribute copies of rationalist Harry Potter fanfiction (Harry Potter and the Methods of Rationality) despite the fact that it's freely available online. (Source: https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/... Ctrl+F "Harry Potter")

Is this how money is laundered in non profits?


IIRC they're not allowed to make money from Harry Potter fan fiction, which this is (one that teaches effective altruism concepts and rationality), so I don't see how this would be a way to launder money.

It's also worth noting that this was $28K out of $923K, so not too big a part of it (~3%). Of course the money should be used for other things if that's estimated to be more effective, though.

The reasoning is in the post you linked, which I think gives a pretty good explanation. The copies were given to 650 math medalists (so ~$43/person), whom they deemed a good target audience for this.
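As a quick sanity check on those figures (the $28K grant, $923K round total, and 650 recipients are the numbers quoted above):

    # Quick check of the grant figures quoted above.
    grant = 28_000         # HPMOR printing/distribution grant, USD
    round_total = 923_000  # total granted in that round, USD
    recipients = 650       # IMO/EGMO medalists receiving copies

    print(grant / round_total)  # ~0.03 -> roughly 3% of the round
    print(grant / recipients)   # ~43.08 -> roughly $43 per recipient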


I very much hope it doesn't

EA has done a great job bringing attention and resources to causes that are clear wins - it's just obvious that global poverty interventions like Against Malaria and GiveDirectly are highly beneficial.

But effectiveness doesn't always mean doing the most obviously effective thing. It means doing the highest expected value thing. Sometimes that means trying things with a low expectation of success, but a high potential payoff.

There's a trade off between exploring the space of potential interventions, and exploiting known good ones. EA needs to keep its healthy focus on exploration, or risk getting stuck in a local optimum. Suppose the X-risk people are right - if we get that wrong there won't be any more people left to help. Certainly EA was sounding the alarm about lack of pandemic preparedness ages ago, and while that wasn't existential, it was the right direction to be looking in. Imagine if we'd had a bit better foresight, been a bit better prepared, and been able to prevent 6 million people dying of COVID.

Something sounding "weird" isn't sufficient reason to discard it. There needs to be an actual convincing argument. And let me tell you, the weird ideas have survived some pretty rigorous debate.


A problem is that the "expected value" can easily be inflated via hypothetical scenarios to turn your pet interest into THE ONLY THING ANYONE SHOULD EVER CARE ABOUT AT ALL!!!

Among the "rationalist" crowd, this is commonly done for both AI alignment and longevity.


These kinds of things have been pondered in EA circles, lest you think that's a unique gotcha.

https://en.wikipedia.org/wiki/Utility_monster

And they're not, because EA is tackling a diverse set of issues, not just AI alignment and longevity - the top spot for donations goes to malaria nets in Africa.


I just don't think this is a problem in practice. Sure, some people advocate spending all your charitable dollars on X-risk, but I think most people find a happy medium of spreading their giving between the different causes they think are important.


Philanthropists spend billions of dollars each year on the "arts", with some vague idea of enlightening people, but they're the crazy ones for buying 650 most likely high-impact individuals a book explaining their thought processes?


> despite the fact that it's freely available online.

The poor frequently do not have printers or significant access to consistently functional digital devices.


> The poor frequently do not have printers or significant access to consistently functional digital devices.

If a group of people is so poor that they don't have access to digital devices, maybe printing out rationalist Harry Potter fan fiction shouldn't be at the top of the priority list for a charity trying to spend money to improve their lives.


I think teaching rationalism with approachable and interesting materials is fine and reasonable.

I suppose we could be equally upset that Sesame Street uses puppets?


They gave copies to the 650 people who advanced far enough in IMO and EGMO (International Math Olympiad / European Girls’ Mathematical Olympiad) to be considered "medalists". It's very unlikely that those people don't have internet access.


In the US and most civilized nations they have public library access, which can easily be used to access Harry Potter fanfiction. It's not clear how they benefit from a printed copy of the Harry Potter fanfiction.


The main benefit is that they’re probably more likely to actually read it. I dunno about you, but if someone gave me a book I’d probably actually read it. I definitely wouldn’t read a PDF someone emailed to me.


And how do you get to the library?


How did they get to the IMO?


Were the books actually given out to the poor?


No, but imagine if they were.


My favorite grant from that link was the $20k one for someone to learn to ride a bike and think about AI.


You’re saying that the existence of a book being freely available online means that it’s always pointless to give out physical copies? Do you make the same criticism of eg Dolly Parton’s program to give books to families for free, in cases where the copyright on the book has expired?


I guess the universe is allowed to be crazy.

The universe is allowed to decide that Donald Trump will be president in 2016 despite all the reasons to think it was crazy that he would become president. The universe is allowed to decide that Volodymyr Zelenskyy will be elected president of Ukraine on the basis of having starred as the president in a comedy.

And I guess the universe is allowed to decide that a piece of fanfic that is desperately in need of an editor will be successful at recruiting talent for the rationalist or AI Safety communities.

I expect you are probably skeptical of AI Safety, but then your criticism would be a criticism of the final objective, not the method (distribution of HPMoR) used to achieve the objective.


It seems that a lot of people here think that EA focus on AGI is overblown because they personally think that AI risk is overblown. It's certainly debatable if a human level or greater AI will be created in the next 50-100 years.

However, if you believe that AGI will be created, then you have to admit that societal impacts will be tremendous, the same level as the advent of nukes.

People are pointing out the problems in discrimination that we face with current sub human AI. Imagine AI systems smarter than 90% of humans or smarter than all humans. Wouldn't that make all current problems in AI minuscule if we couldn't control such a program?


Reading that convinced me that giving Harry Potter fanfic to Russian math champions is a decent idea, but it failed to explain why it would cost $30K.


IIRC they quoted $43/book, and to me that sounds right in the ballpark of what I would expect it to cost for printing and shipping a low-quantity issue of a large book.


I couldn't agree more. I used to love EA, but when it became a way to justify that a bunch of nerds (speaking as one) are _actually_ doing the most important work humanity ever saw by learning about AI... it just put a sour taste in my mouth. I wish they'd focus more on global health initiatives.


Spending $30K promoting HPMOR sounds like a great investment if done well.


[flagged]


> wasn't one of their initial proposals that deworming medications should be sent to rural Africa

Yes, and it's heavily funded by EA givers (yours truly included). People like to harp on the weird edge-cases, but Givewell.org is directing hundreds of millions of dollars per year[0], and it's pretty much entirely things like deworming, malaria prevention, etc.[1]

If you visit effectivealtruism.org you'll see it's not a bunch of articles about AI research. It's about convincing people to donate in ways that have the most positive impact, and includes a big "What has the effective altruism community done?" section that's mostly the standard "ensure everyone has food, water, basic healthcare, etc." The key is that there's an emphasis on making sure you're actually improving those metrics as much as possible, rather than whatever has the flashiest marketing.

There are a bunch of people talking about AI safety and other weirder things, but we mostly just look at research to figure out what improves lives and do that.

[0] https://www.givewell.org/about/impact

[1] https://www.givewell.org/charities/top-charities


> other than getting people to donate

If that's the most effective way of helping people, why would they not do that? They do more than that though, they fund projects of many different kinds. They also coach people to go into fields where they can have a positive impact for example.

> wasn't one of their initial proposals that deworming medications should be sent to rural Africa, for its disproportionate impact it could have?

They have different funds:

- Global Health and Development Fund [0]

- Animal Welfare Fund [1]

- Long-Term Future Fund[2]

- Effective Altruism Infrastructure Fund[3]

Deworming would go under "Global health and development" (if it's estimated to be the most effective) and AI under "Long-term future fund".

---

0. https://funds.effectivealtruism.org/funds/global-development

1. https://funds.effectivealtruism.org/funds/animal-welfare

2. https://funds.effectivealtruism.org/funds/far-future

3. https://funds.effectivealtruism.org/funds/ea-community


Yeah, and there’s also GiveWell. Probably 80% of EA money goes to development, with the rest evenly split between the other three cause areas.

I’d be very shocked if AI research got more than 5% of EA funding.


Per GiveWell's website, around $3.7 million has gone to LTFF since 2020, $6.5 million to the animal welfare fund, and $14 million or so to GH&D. So it's more like 15%, and the majority of the LTFF funding is what I'd describe as "self-dealing", in that it's almost all money to fund members of the EA/longtermist community for thinking about things.
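Taking those numbers at face value (they're the rough figures quoted above, in millions of dollars), the long-termist share works out like this:

    # Rough share of fund money going to the Long-Term Future Fund,
    # using the approximate figures quoted above (millions of USD).
    ltff = 3.7
    animal_welfare = 6.5
    global_health_dev = 14.0

    total = ltff + animal_welfare + global_health_dev
    print(ltff / total)  # ~0.153 -> roughly 15% across these three funds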

(I should note, I donate a decent sum to GiveWell each year, though only to the Global Development fund, and it deeply irks me that they don't put some of their effort into innovative ideas to combat climate change)


> they're actually doing is researching the dangers of AI and other neckbeard concerns that have nothing to do with the problems and sufferings of actual human beings today.

This is really the kind of comment I'd like not to see on HN.

"Neckbeard concern" implies that... what, people worrying about AI risk are worrying for fun? Because they're socially awkward nerds disconnected from reality?

This reminds me of https://xkcd.com/743/

Worrying about the future consequences of incoming technological developments always looks like "neckbeard concerns" until it's too late to do anything about it.

And thus here we are, worrying about climate change and supply chain sovereignty and corporate control of information and wondering why nobody took sensible measures 20 years ago.

> And there's no mention of anything this org has actually done to help anyone, other than getting people to donate

Note that this is an internal post from an individual suggesting organizational changes, not an official public-facing communication.

As sibling comments pointed out, they do have posts listing success stories.


My concern about effectiveness about the "AI safety" stuff is that it kind of sidesteps the whole effectiveness analysis: if you're nominally working on risks you claim to be existential or apocalyptic, then the work is almost automatically "effective" even if you don't do anything really, just because the risk is so big that an infinitesimal decrease amounts to massive impact.

I'm sure just because of the nature of the movement there's been lots of thinking about the likelihood of various future scenarios and accounting for this in less flippant terms than I've just described it, but it seems to me that this is a group of people who are skewed to think AI is more likely, more potentially dangerous, and more interesting to work on and fund.

Putting aside the likelihood of AGI materializing at all, it's not really clear to me that paying for a bunch of research on this necessarily affects the actual AI that would be produced anyway, as that's being done by different people with different interests and incentives. You see people discuss the numbers of people affected by something like malaria prevention versus eventual extinction at the hands of AI as a justification for the need for AI work, but less talk about how we actually know what kinds of things we can do to do malaria prevention and really not so much the AI stuff. Obviously you have to start somewhere but...

From an outsider's perspective, the whole area just kind of feels like a pit you can throw money into while pretending you're being rigorous about spending.


This is a perfectly reasonable concern/criticism and one that plenty of EAs make. Going off funding, which is overwhelmingly directed towards global health and development, it’s probably one that most of them agree with. It also aligns with my experience talking to EAs, who mostly don’t work in or donate to AI.

But this is a criticism about effectiveness made within an EA framework. It assumes the thing we want to do is maximize the amount of good we do with our resources, and provides rational arguments for why AI won’t do that.

The AI folks think their cause is the one that does the most good, and they have rational arguments for that position. That’s why they’re considered part of the EA movement (despite not fitting in with the original vision).

That also means we have to listen and provide counterarguments before we reject their position. What we definitely shouldn’t do is write them off as “neckbeards” just because they’re working in tech and have unusual concerns. That’s how you end up writing off some 1930s physicist worried about the existential risk of nuclear fission weapons as some “Weird neckbeard nerd.”


The concern that EAs have about AI alignment is that there is practically no satisfying theoretical framework right now for how to build "safe" AI, and that's kinda concerning as we approach human-emulating AI (at least in the realm of chatbots, art, writing...)

There are hotfixes people add on for edge cases like "don't label people as gorillas" or "don't say certain words that Nazis say", but for problems like "don't ever initiate a conversation likely to cause someone to commit suicide"... there's no foundational work (that's not super vague) that compels an AI to optimize for human success and life.

That's not great IMO, and we should probably dedicate as much money to it as we dedicate to, I dunno, potato chips ($65 billion a year in the US IIRC).


But what is "Effective Altruism"? Is it a charity? A movement? Both?

The link https://forum.effectivealtruism.org/posts/fd3iQRkmCKCCL289u/... circles back to itself, with no simple explanation.

The "intro" from the front page has 2800+ words and isn't clear either: https://www.effectivealtruism.org/articles/introduction-to-e...

That intro is itself prefaced with a paragraph in italics that recommends a whole "handbook" for a "more thorough introduction". I'm afraid to click on that for fear it leads to a whole library of volumes discussing general ideas about life, freedom, religion and other minor and trivial concepts.

Why can't organizations' blogs have a simple, clear and easily accessible "about us" link that describes what they do in one sentence?


> Effective altruism is about doing good better

> Effective altruism is a social movement and philosophy focused on maximising the good you can do in your career, projects, and other life decisions.

That's the 'what they do in one sentence' from the frontpage of https://www.effectivealtruism.org/


smh people in this thread using their own ignorance to hate on EA. There are valid criticisms of EA for sure. But let's not argue against straw people


The thing that worries me most about EA is how they consider that 2 to 4 degrees warming from climate change is not a big deal. "Bad but survivable" is how they describe it on their existential risks page. I think they're misreading the science a little. Unfortunately. I wrote to Toby Ord a while back to comment on this, and he very politely wrote back and fobbed me off with some story.


Global warming should increase the total amount of arable land and therefore increase agricultural output.

The harm comes from displacement of people from coastal cities, islands, and regions that become unsuitable for agriculture. Billions will try to move to Siberia & Canada. There will be conflict.


Can you share what it is you're looking at that makes the case for 2 to 4 degrees warming being an existential threat?


I will refer you to the latest IPCC summary report. https://www.ipcc.ch/srccl/chapter/summary-for-policymakers/


Right. Are you referring to this chart of risk levels at different levels of temperature rise?

https://www.ipcc.ch/srccl/chapter/summary-for-policymakers/8...

Asking because, while all the risks laid out are quite serious, nothing in the language of the report (that I can find) mentions anything about the planet becoming unsurvivable altogether.

The language that does talk about impacted populations speaks in terms of hundreds of millions of people. For instance, section A.5.5 specifically connects number of human lives impacted under different levels of projected temperature rise.

My point is: climate change is a really big problem, and we should absolutely take action on these recommendations. Using language like “existential threat” is hyperbolic and, imo, harms the cause.


Thank you for the chart. I think that any discussion of the matter will be futile. If only because none of us - even the scientists - have enough information to make a prediction about scale, rate and impact of climate change over time. But the pattern we are seeing now indicates that things are moving faster than the climate scientists expected.

I feel the word 'existential' is appropriate (and by the way that word was used by EA not me) because for hundreds of millions of people - maybe more - food and water shortages, accompanied by disease may well have such an effect. When the scientists talk about human lives impacted under different levels, we know these are best estimates, and most likely deliberately cautious.

When the summary says 'Approximately 3.3 to 3.6 billion people live in contexts that are highly vulnerable to climate change (high confidence)', what does that mean? Highly vulnerable to me doesn't mean they'll just feel uncomfortable, but something more. Optimist vs pessimist perhaps?

Anyway, as I said, I am not expert enough to make any kind of predictions as to the future. All I do is watch as various predictions in the Reports are reached ahead of time, and it's very worrying. Because we haven't factored in so many things (because we can't). Will runaway catastrophic impacts occur, such as a massive uptick in methane emissions after sufficient permafrost melt? What will that mean? Will the food chain on which billions rely collapse because of ocean warming and increased acidity? Will the increasing unpredictability in weather patterns make mass market farming impossible to guarantee?

There's so much we don't know. This is uncharted territory really, isn't it?


Very disappointed at missing an AMAZING opportunity to say "shut up and multiply" about the funding


Flippant slogans are bad outreach.

Telling people to shut up and change their way of thinking until they agree with you is a very bad thing to do once you're no longer a fringe movement, even if your way of thinking is right.


@dang I would suggest editing the title, changing EA to "Effective Altruism"

I'm also really curious about HN's view of Effective Altruism, and how individuals here handle their charitable giving.


I'm non-religious but many religions have a norm of charitable giving which I'm glad EA is trying to introduce for secular people.

I'm not a pure utilitarian and try and get a bit of feelgood consumption out of my charity.

I have an automatic monthly deduction to a cause I'm interested in, https://www.orangutan.org.au/ which buys up rainforests and protects orangutans in Borneo.

The reason I go for this instead of "antimalarial bednets" (ie optimal human life saving) is because I have visited Borneo a few times, and think protecting great apes is important to me personally.

I also kick in a few hundred bucks a year to sponsor friends doing various charity challenges.


I personally care about rainforests and orangutans as well. But if I had a monthly donation going towards orangutans I would want to know what impact it had. How much of my money actually goes to successful habitat preservation, rather than e.g. the running costs of the organisation? Does it cost $5 million to intervene to save one orangutan, or $5,000? What does the evidence look like?

If I imagined two rooms, one with a child about to die of malaria in it, and another with an orangutan in it, and you told me I could spend $7,000 to save the child or $50,000 to save the orangutan, I'd be very hard pressed to save the orangutan.

I think I would get the same 'feel good consumption' from more effective charitable giving, where my money went further and did more good in the world. The few thousand dollars needed to save a life from malaria would feel as good or better than tens of thousands of dollars needed to fail to save an orangutan.


And yet a consideration that, in my experience, rationalists ignore at their own expense is that human factors like familiarity weigh in to the thinking of the typical layperson. Many people might bristle at, and feel "told" or "admonished" by this line of argumentation.

I'm all for methods to encourage greater rationality. I'm all for a world where, someday, folks are doing this sort of calculus more often in their lives.

We're not there, yet. The approach you are taking here falls short in being a persuasive, and thus effective, argument. imo, more awareness and effort needs to go into the rational community itself trying to get more in touch with what sort of, well, let's call them "less rational", methods work effectively for/with/"on" the general populace.

I know, dark arts and all that. I'm not suggesting to outright lie to people. I am suggesting that falling into the typical mind fallacy is a sort of highly visible antipattern exhibited, in my experience, by rationalists. I'm suggesting that a world where a person helps save orangutans and rainforests and acts as a conscientious, charitable giver is better than the world where they don't. I'm suggesting to focus on that good, and then do work where you can to make the world even more better.

Perhaps a look into the works of David Chapman [1] could help, here. Perhaps a greater appreciation for the complexity and subjective reality of other humans' thought processes and rationalizations, to them, would help you make better use of empathy and prosociality to become even more effective.

[1] https://meaningness.com/deconstructing-yourself-6


You can find that info on their page eg https://www.orangutan.org.au/about-us/top/

Buying/leasing rainforest or lobbying governments for protection seems pretty cheap in the grand scheme of things: 11K wild orangutans for $3.3M last year is less than $300 per orangutan per year

There are 70,000 humans per orangutan, so even though they are more genetically distant from us, rarity should count
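Taking those numbers at face value (the $3.3M budget and 11K orangutans are the figures quoted above), the per-animal cost is easy to check:

    # Per-orangutan cost using the rough numbers quoted above.
    annual_budget = 3_300_000   # dollars spent last year (quoted figure)
    wild_orangutans = 11_000    # population covered (quoted figure)
    print(annual_budget / wild_orangutans)  # 300.0 -> ~$300 per orangutan per year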


I disagree with your choice of charity but I support your line of thinking and your choices. The fact that this charity has numbers puts it way ahead of the slush funds that do zero good (or negative good). I personally think that's how EA will go mainstream, by encouraging everyone to make more quantitative decisions that result in more funding to better charities.

This person is also straightforward with their reasoning - yeah sure it's not as effective as malaria nets but it's a personal cause they care about. Most people don't even think about how effective the causes they support are - at all! It's mindblowing.


The trade-off is often not between charity X vs the most optimal charity, but any charity vs further consumption/savings etc.

Imagine EA ideas applied to fitness: sure, some people may stick to a perfect routine, but for others, just getting off the couch and doing an activity they like enough to keep doing is best.


Why not both? Most of my donations go to things like AMF, but I do put some money towards the causes that my ape brain prefers.


> I'm also really curious about HN's view of Effective Altruism

I have never heard of it but judging from the comments it's well known? Now I feel like the perfect ignoramus.


I thought this was an article about Electronic Arts and a scoop about liquidity issues. Unfortunately not the case...


Especially as they've just lost the FIFA licence, renaming the next football (soccer) games to "EA FC" as FIFA were asking for a billion for the licence to use their name.

https://www.theguardian.com/games/2022/may/10/electronic-art...


Yes, hahaha, me too. I clicked, happy, because I thought "yes, EA, the big devil company that has never released any game for Linux in its history - thanks, karma".

But no.


Actually, EA released the source code and a binary Linux version of the original SimCity under GPL-3, which I ported to various Unix platforms including Linux and the OLPC.

https://medium.com/@donhopkins/open-sourcing-simcity-58470a2...

>Open Sourcing SimCity, by Chaim Gingold. Excerpt from page 289–293 of “Play Design”, a dissertation submitted in partial satisfaction of the requirements for the degree of Doctor in Philosophy in Computer Science by Chaim Gingold.

Granted, EA's QA folks had never QA'ed a Linux game before, so I had to walk them through installing VMWare on Windows and gave them a Linux operating system image to test it with, but it did pass QA, and they released it in binary on the OLPC as well as in source code form.

https://github.com/SimHacker/micropolis

Here's the contract with all the legal details:

https://donhopkins.com/home/olpc-ea-contract.pdf


I figured it was either that or Early Access and its impact on development schedules, milestones, feedback, etc.


EA stands for Electronic Arts in my planet's software field. For decades. It was a weird shock to click through on that article link. ;-)

"Kids... get off my lawn!"



