
Right! My point is that there are good aspects to the Effective Altruism movement, but historically they've done themselves a disservice by also embracing some fringe stuff under the Effective Altruism banner.

Like I said in my comment, I'm hoping this blog post is a signal that they're going to start maturing the organization a bit more and hopefully distance themselves from some of the weirdness that tries to ride the coattails of the Effective Altruism movement.

Go ahead and spin up all of the AI think tanks and Harry Potter fan-fiction distribution under a different name, but forcing it under the Effective Altruism banner only drives away donations from people who go into this expecting an organization focused on doing charitable acts in the real world, not funding additional think tanks.




Why aren't EA organizations allowed to be concerned about AI safety? Sure, there's nothing dangerous yet, but the trend towards more capable AI has become fairly clear these past few years; at some point it will get concerning, even if only in the wrong hands, and surely it would be better to be prepared for that beforehand?


Well, is it effective? There are a lot of problems that exist right now that could be meaningfully worked on with any amount of money at all.

Not that all organizations need to work on immediate practical problems. But I think all organizations that are named things like "effective altruism" should.

And do they work against any of the actual current problems with applied "AI"? Or is it all just singularitarian eschatology still?


Check out the current work of Stuart Russell. There's some legit work being done on the alignment problem that doesn't amount to trying to encode all of human values.

EA basically splits between extremely hard-nosed short-term projects and much riskier but potentially extremely valuable projects aimed at safeguarding the long-term future of humanity. This is a consequence of those extremes being the most neglected.


Exactly. Even in the short term, there are some very worrying behaviours in AI systems which have real consequences. For instance:

* Some researchers recently built an AI system to generate low-toxicity molecules as candidates for medicines. They realised that if they changed their system to maximise toxicity instead, it designed thousands of toxic candidate molecules, including the VX nerve agent (https://www.nature.com/articles/s42256-022-00465-9) (see the sketch after this list)

* When GitHub released their Copilot AI code-completion tool, it could autocomplete prompts like `API_KEY: ` with plausible-looking secrets learned from public code

* AI models used for decision-making in hospitals and courts frequently exhibit extreme racial prejudice (eg https://www.nature.com/articles/d41586-019-03228-6)

* and many, many other examples
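
To make the first bullet concrete, here's a minimal toy sketch (the names and structure are mine, not the paper's actual code) of how a scoring function used to guide a generative search needs only one change to go from drug discovery to toxin discovery:

```
# Toy sketch, not the paper's code: a generative search loop ranks
# candidate molecules using a learned efficacy model and a learned
# toxicity model.
def candidate_score(efficacy: float, toxicity: float, invert: bool = False) -> float:
    """Score a generated molecule; higher-scoring candidates are explored further."""
    if invert:
        # The misuse case: a single flag flips the search toward toxic molecules.
        return toxicity
    # Intended use: favour effective, low-toxicity candidates.
    return efficacy - toxicity
```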

There's a real sense of utopianism and not much consideration of misuse even on a very short timescale. Even if you don't think AI will become an existential risk in the future, there are enough problems caused by misuse of current systems that it warrants attention.

All of the money I've seen flowing in the AI safety EA space has been put to extremely good use: $30k to a YouTuber making videos on AI safety who introduced me to the field and is my go-to for explaining topics like alignment (what you want vs what you say you want), grants to extremely productive AI safety researchers, grants to fund educational bootcamps and scholarships, etc.
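
If it helps, here's a hypothetical toy illustration of that alignment parenthetical (the scenario and numbers are mine): an agent maximising the stated reward can score perfectly while doing the opposite of what was intended.

```
# Hypothetical toy example of "what you say you want" vs "what you want":
# a cleaning robot is rewarded for seeing no dirt, so the highest-scoring
# policy is to block its own camera rather than clean the room.

def stated_reward(visible_dirt: int) -> int:
    return -visible_dirt   # what we said: penalise dirt the robot can see

def intended_reward(actual_dirt: int) -> int:
    return -actual_dirt    # what we meant: penalise dirt that exists

# Policy A: actually clean half the room.
print(stated_reward(visible_dirt=5), intended_reward(actual_dirt=5))   # -5 -5

# Policy B: cover the camera; visible dirt drops to 0, real dirt remains.
print(stated_reward(visible_dirt=0), intended_reward(actual_dirt=10))  # 0 -10
```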

(Disclaimer: I'm an EA-aligned person myself, so I'm fairly biased!)


“Allowed” isn’t language that GP is using; you’re introducing that.

In fact, GP is suggesting those factions continue to raise money, just under a different banner.

GP’s concern, which I share, is that the movement’s overall effectiveness is being undermined by outside perceptions.

EA should inherently be concerned with making the top of their altruism dollar funnel as wide as possible, to channel the most good most effectively.

If the potential non-AI-safety money left on the table exceeds the money raised for AI-safety causes, is that most effective? For whom?

In a world where that potential money enters the funnel, and AI safety grows a separate funnel of its own, how is that not better?


The whole point of effective altruism is that it operates charity like a business, helping the most people at the lowest possible cost. The benefits of funding nebulous AI-safety charities are very unclear.


Your take (that they are driving away donors by giving to causes you think are silly) would seem to be contradicted by the content of the post (that they have raised so much money that they need new models for how to distribute it).


Both could be true simultaneously. The relevant question would be: how much more money is not getting raised?



