
I've re-skimmed it recently as well, and found it extremely zeerusted and needlessly alarmist in retrospect. A lot of it is written from the perspective of "a handful of scientists builds a brain in a bunker, a la the Manhattan Project", which is so far from our actual reality that 90% of the concerns don't even apply.

Exponential runaway turned out not to be a thing at all, progress is slow (on the order of years), competitors are plentiful, alignment is easy, and everything is more or less done in the open, with papers being published every day. We're basically living out the absolute best possible option of all the ones outlined in the book.




Looks like the real-world risks of AI are, predictably, AI being used to dodge responsibility/liability/regulation, or plain copyright laundering (which, likewise predictably, is only a temporary loophole until laws catch up), and companies like Google reversing all the progress they made in reducing their emissions by doubling down on resource-intensive AI.

"Avoiding regulation" as a Service of course has a huge market potential for as long as it works, just like it did for crypto and the gig economy. But it is by definition a bubble because it will deflate as soon as the regulations are fixed. GenAI might have an eventual use but it will in all likelihood look nothing like what it is used for at the moment.

And yeah, you could complain that what I said mostly applies to GenAI and LLMs but that's where the current hype is. Nobody talks about expert systems because they've been around for decades and simply work while being very limited and "unsexy" because they don't claim to give us AGI.


Corporate needs you to find the differences between this picture:

- layout of an expert system's components

and this picture:

- an agentic framework that uses an LLM as its reasoning system

They're the same picture :)
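
To make the joke concrete, here's a minimal sketch of the structural parallel (all names are illustrative, not from any real framework): both architectures are a loop of working memory → reasoning component → action, and the only difference is whether the reasoner slot holds a hand-written rule base or a model call.

```python
from typing import Callable, Dict

Facts = Dict[str, str]
Reasoner = Callable[[Facts], str]  # maps working memory to the next action

def rule_based_reasoner(facts: Facts) -> str:
    # Classic expert system: hard-coded if/then rules over working memory.
    if facts.get("temperature") == "high":
        return "open_valve"
    return "wait"

def fake_llm_completion(prompt: str) -> str:
    # Placeholder for an actual model API call in a real agentic framework.
    return "open_valve" if "high" in prompt else "wait"

def llm_reasoner(facts: Facts) -> str:
    # Agentic framework: the same slot, but the decision is delegated
    # to a (here, stubbed) LLM.
    prompt = f"Given {facts}, what should the controller do?"
    return fake_llm_completion(prompt)

def agent_step(facts: Facts, reasoner: Reasoner) -> str:
    # The surrounding control loop is identical either way.
    return reasoner(facts)

facts = {"temperature": "high"}
print(agent_step(facts, rule_based_reasoner))  # open_valve
print(agent_step(facts, llm_reasoner))         # open_valve
```

Swap the reasoner and nothing else changes, which is the whole point of the "same picture" quip.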



