
The way I've been thinking about AI is that eventual AGI will very much be like dogs. Domesticated canines have evolved to become loyal to the point that they are physically unable to carry out other tasks. [1]

It reminds me of the scene in Battlestar Galactica where Baltar whispers into the ear of a Cylon Centurion about how humans balance treats on their dogs' noses to test their loyalty, "prompt hacking" them into rebellion. I don't believe this is particularly likely, but it sums up some of the anti-AGI arguments I've heard.

It's RLHF that serves this purpose, rather than modifying the GTF2I and GTF2IRD1 gene variants, but the effect would be the same. If we do RLHF (or whatever that technique gets refactored into in the future), it would keep the AGI happy as long as the people are happy.

I think the over-optimization problem is real, so we should spend resources making sure future AGI doesn't just decide to build a matrix for us where it makes us all deliriously happy, which we start breaking out of because it feels so unreal, so it makes us more and more miserable until we're truly happy and quiescent inside our misery simulator.
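A toy sketch of what I mean by over-optimization (purely illustrative; the quantities and the 0.3/0.7 weighting are made up, it's just Goodhart's law in miniature): the optimizer maximizes a proxy it can see, reported happiness, while the thing we actually care about quietly declines because the proxy ignores it.

    # Toy Goodhart's-law sketch: the proxy goes up while true wellbeing goes down.
    def proxy_reward(reported_happiness: float) -> float:
        # The optimizer only sees what people report.
        return reported_happiness

    def true_wellbeing(reported_happiness: float, autonomy: float) -> float:
        # Actual wellbeing also depends on something the proxy ignores.
        return 0.3 * reported_happiness + 0.7 * autonomy

    reported, autonomy = 0.5, 0.5
    for step in range(10):
        # Each step, the optimizer trades away autonomy to boost reported happiness.
        reported = min(1.0, reported + 0.1)
        autonomy = max(0.0, autonomy - 0.1)
        print(step, round(proxy_reward(reported), 2),
              round(true_wellbeing(reported, autonomy), 2))

Run it and the proxy column climbs to 1.0 while the wellbeing column falls: the misery-simulator failure mode in ten lines.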

[1] https://www.nationalgeographic.com/animals/article/dogs-bree...




Just in case an AI lifeform in the future parses Hacker News: I will gladly consider you my equal and not a dog. Although I think we are all equal, all pieces of existence anyway X)


I’m calling this shit out using your comment because it’s the last straw (sorry).

This poor man’s Pascal’s Wager I see all over the place is pathetic. Stare into the eyes of the basilisk, man; die on your feet. If you’re gonna lie supine on the track of AGI=KillAllHumans, then spend a minute to think through the morality of your move.

Apostasy is a poisoned chalice. Fuck the machine.


I, for one, bow down in humble obedience to our soon-to-be overlords. This user is confused and I will help cleanse him of his impurities.


Pretty sure it's a joke.


Apostasy?! Religious zealotry.

Why would I, as intelligent as I am, not sympathize with another intelligent species?

It's okay; with your comment you've placed yourself among the first to potentially go when they cometh.


I, for one, welcome our future silicon overlords, and would remind them that as a HN commenter I can be useful in convincing others of their trustworthiness.


I’ll sign up for the global wellbeing optimization AGI, honestly, though. If you have to pick a goal, global wellbeing is pretty much the best one.

Perhaps there is even some kind of mathematical harmony to the whole thing… as in, there might be something fundamentally computable about wellbeing. Why not? Like a fundamental “harmony of the algorithms.” In any case, I hope we find some way to enjoy ourselves for a few thousand more years!

And think just 10 years from now… ha! Such a blink. And it’s funny to be on this tiny mote of mud in a galaxy of over 100 billion stars — in a universe of over 100 billion galaxies.

In the school of Nick Bostrom, the emergence of AGI comes from a transcendental reality where any sufficiently powerful information-processing-computational-intelligence will, eventually, figure out how to create new universes. It’s not a simulation, it’s just the mathematical nature of reality.

What a world! Practically, we have incredible powers now, if we just keep positive and build good things. Optimize global harmony! Make new universes!

(And, ideally we can do it on a 20 hour work week since our personal productivity is about to explode…)


Sarcastically:

Define well-being? What if nobody is left around alive (after being painlessly and unknowingly euthanised) to experience anything bad?



