I used to think that this fear was driven by rational minds until I read Michael Lewis' "Going Infinite" and learned more about effective altruism (EA).

Michael Lewis writes (paraphrasing) that what is remarkable about these guys, i.e. EAs, is that they're willing to follow their beliefs to their logical conclusion without regard for the social costs, the consequences, or the sheer inconvenience of it all. In fact, in my estimation that is just the definition of religious fundamentalism, and it gives us a new lens through which to understand EA and the well-funded brainchildren of the movement.

Every effective religion needs a doomsday scenario, or some apocalyptic 'second coming' (not sure why, it just seems to be a pattern). I think all this fear-mongering around AI is just that: at the end of the day it's based on irrational belief. It's humanism rewashed into tech 'rationalism' (which was itself originally washed out of Christianity et al.)




My comment above, which I think you’re replying to, is not based in fear. It’s based on first-hand experience experimenting with the likes of AutoExpert, AutoGen, and ChatDev. The tools underlying these projects are quite close to doing, in a fraction of the time and at a fraction of the cost, what takes human knowledge workers a long time (and hence a lot of money) to do. I think simplified GenAI grounding arrives as soon as Summer '24. Once hallucinations are grounded and there are cleaner ways to implement GenAI workflows and pipelines, it won’t be long until you see droves of knowledge workers looking for jobs. Or, if not, they’ll be creating the workflows that replace the bulk of their work, and we’ll get that “I only want to work 20 hours a week” reality.
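To make that concrete, here's roughly the shape of the two-agent loop I've been experimenting with. A minimal sketch in the style of AutoGen's quickstart (assumes the pyautogen package and an OpenAI key; the model/key config, task string, and work_dir are placeholders, and exact config keys vary by version):

    import autogen

    # Placeholder model/key config; swap in your own.
    config_list = [{"model": "gpt-4", "api_key": "sk-..."}]

    # LLM agent that plans and writes code.
    assistant = autogen.AssistantAgent(
        name="assistant",
        llm_config={"config_list": config_list},
    )

    # Proxy agent that executes the assistant's code locally
    # and feeds the output (including errors) back to it.
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",  # fully automated, no human in the loop
        max_consecutive_auto_reply=10,
        code_execution_config={"work_dir": "scratch", "use_docker": False},
    )

    # The pair iterates: write code -> run it -> read errors -> fix,
    # until the task is done or the reply limit is hit.
    user_proxy.initiate_chat(
        assistant,
        message="Summarize the columns in data.csv and write the "
                "summary to report.md",
    )

The toy task doesn't matter; what matters is that write/run/fix loop, which is the same loop a knowledge worker runs all day. The open question is only how reliably the model closes it.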


I was responding to the initial question actually (https://news.ycombinator.com/item?id=38113521). I appreciate the insight though - looking forward to checking out the Gibson book.

Still, I'm not sure that I see the 'AI jumping out of the box' scenario any time soon (or ever). Chat apps finding missile codes, or convincing your smart toaster to jump into your bubble bath while you're in it, seem only as real as a man in a box with bad intentions (and unfortunately humans can kill from inside and outside of boxes).

I'm definitely concerned about the implications for social welfare, the social bias in systems that make big decisions about an individual's freedom, etc., but these leaps from super-impressive automated text analysis to a humanity-doomsday scenario seem like fear-mongering to me, mostly because those are risks that already exist today (along with massive problems in general social welfare).

The scenarios that don't exist (like the objective function in Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" that turns the world into a paperclip factory) strike me as easy to fix by just standing by the power outlet so things don't get out of hand. There are a lot of risks that don't exist yet. Alien contact is one of them: it has never happened, but it could, and if it does it could wipe us out, so be afraid and do something noble for the cause. This, to me, feels like a very rational response to what is essentially a 'magical' prior. We're scared now because we 'feel' close to general AI, but we really have no way of quantifying how close we are to it, or how dangerous (if at all) it would actually be. I'm definitely open to being wrong, but it's hard not to agree with LeCun on some level.



