The reason they don't have roadmaps to AGI is that they do not want AGI to be made before the Friendly AI problem has been solved.

When you consider the vastness of mind design space, plus all the ways we've made mistakes reasoning about even simpler and stupider optimization processes like evolution, I don't think they're being too silly.

If any of you want a quick and insightful introduction, this video is very good: https://vimeo.com/26914859




> The reason they don't have roadmaps to AGI is that they do not want AGI to be made before the Friendly AI problem has been solved

Right, which is an impossibility in my opinion. There is an inherent conflict between systems with asymmetric power and capability in a resource-constrained environment. Trying to get around that fundamental principle is an exercise in futility.


Could you elaborate? "Systems with asymmetric power" presumably refers to the AGI - or does it? Maybe you are referring to the AI box, or the utility function design, or the "nanny AI" which is meant to contain the AGI? I don't know what "capability in a resource-constrained environment" refers to, because that could describe pretty much anything in our universe, or any finite universe.


> before the Friendly AI problem has been solved.

That's like saying they aren't telling us how to build a time machine because the grandfather paradox hasn't been solved.

They aren't withholding some unique secrets of the universe because we have unsolved problems... they are just slightly crazy people who can turn a phrase and have pivoted that ability into an endless series of "grants" so as to avoid needing to get real jobs contributing to GDP.

The output of their work is amusing, but just because someone runs around with conviction of purpose doesn't mean what they say reflects ground-truth reality.

Read it for enjoyment, but don't read it for belief.


> The reason they don't have roadmaps to AGI is that they do not want AGI to be made before the Friendly AI problem has been solved.

As if those were unrelated concepts. The more divorced they are from real-world AI software, the less relevant their results will be.


I used to think that way too. Now I believe they are working on problems which will only become relevant 30 years hence (pulling a figure out of my ass just to put a figure on it) - but in 30 years' time, we'll be damned glad that they did, and wish that they'd had more staff and more resources! If we even notice the problem before the world ends, or is saved.


> The reason they don't have roadmaps to AGI is that they do not want AGI to be made before the Friendly AI problem has been solved

Reminds me of: https://www.youtube.com/watch?v=_ZI_aEalijE



