> Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?
If the ASI is aligned for compassion and cooperation, it may convince and assist the two colonies to merge, combining their best attributes (and addressing DNA compatibility); it may help them with the resources they need, and perhaps offer birth-control solutions to help them escape the Malthusian trap.
> Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.
An ASI aligned for compassion and cooperation could:
1. Provide unbiased, comprehensive analysis of the situation (an odds calculator that is biased about your chances of winning is not useful, and even if early systems have such faults, an ASI, being an ASI, would by definition transcend such biases)
2. Forecast the long-term consequences of various actions (if the ASI judges your chance of winning to be 2%, do you declare war or seek peace? see the sketch below)
3. Suggest innovative solutions that humans might not conceive
4. Mediate negotiations more effectively
An ASI will have better answers than these, but that's a start.
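To make point 2 concrete, here's a minimal expected-utility sketch in Python. The 2% win probability comes from the example above; the utility values are made-up illustrative assumptions, not anything a real forecaster would output.

```python
# Minimal expected-utility sketch for the "2% chance to win" question.
# All numbers are illustrative assumptions for the sake of the example.

def expected_utility(p_win: float, u_win: float, u_lose: float) -> float:
    """Expected utility of going to war, given a win probability."""
    return p_win * u_win + (1 - p_win) * u_lose

P_WIN = 0.02      # the forecaster's (hypothetical) estimate of winning
U_WIN = 100.0     # payoff if the war is won (arbitrary units)
U_LOSE = -500.0   # cost if the war is lost
U_PEACE = -50.0   # cost of a negotiated peace (concessions, etc.)

ev_war = expected_utility(P_WIN, U_WIN, U_LOSE)  # 0.02*100 + 0.98*(-500) = -488.0
print(f"EV(war)   = {ev_war:.1f}")
print(f"EV(peace) = {U_PEACE:.1f}")
print("Seek peace" if U_PEACE > ev_war else "Declare war")
```

With those made-up numbers, peace dominates by a wide margin; the point is that an unbiased forecaster forces the comparison to be made explicitly rather than on wishful thinking.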
> So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative
Developing ASI likely requires vast amounts of cooperation among individuals, organizations, and possibly nations. Truly malicious actors may struggle to achieve the necessary level of collaboration. If entities traditionally considered "bad actors" manage to cooperate extensively, it may call into question whether they are truly malicious or whether their goals have evolved. And self-interested actors, if they are smart enough to create ASI, should recognize that an unaligned ASI poses existential risks to themselves.