Unfortunately, no one can really articulate the full set of human values yet, especially since they often conflict in real-world situations. Which value(s) should be prioritized over others? And what about differing values across cultures, and among groups within each culture?
You might have heard of the Trolley Problem. There are many variations of it, some far more complicated.
The problem with AGI is more acute than most other technologies because it is software-based, which makes it nigh impossible to regulate effectively, especially globally.
> Which value(s) should be prioritized over others? How about differing values between cultures and human groups within each culture?
"Don't kill humans" sounds like a good start.
Honestly, this problem seems a lot simpler to me than creating AGI itself.
I agree it is challenging. The most imminent question is probably, "Should a self-driving car be allowed to kill 10 people on the sidewalk to avoid a head-on collision, or should it accept the head-on collision and let the driver die?"
These domain specific problems will come up before the AGI one does, and we can address them as they become relevant.
You've noted that regulating AGI is difficult, and it sounds like you don't have any other solution in mind.
It is often not practical for something dumber to regulate something far more intelligent. (Humans win against lions despite our physical weakness.) So the best solution I have heard of is to create a Provably Safe AGI (or Friendly AI, in Yudkowsky's term) and have it help us regulate other AI efforts. A moral core that aligns with human values needs to be part of this Safe AGI.
It is definitely very challenging to create one, and more challenging than creating an arbitrary AGI. The morality also needs to be integrated into the Safe AGI's core in a way that is not susceptible to the self-modification abilities an AGI could have. That is why we need to work on this aspect of AGI now.
This Harvard session on Justice (The Moral Side of Murder) is edifying and surprisingly fun to watch: https://www.youtube.com/watch?v=kBdfcR-8hEY