I'm in favor of public discussions. It's tragic that there is a segment of the relevant powers doing its best to shut down any discussion of this kind. Thankfully, they seem to be failing.
But to answer the question, the danger of AI is uniquely relevant because it is the only danger that may end up being totally out of our control. Nukes, pandemics, climate change, etc. all represent x-risk to various degrees. But we will always be in control of whether they come to fruition. We can pretty much always short-circuit any of these processes that are leading to our extinction.
A fully formed AGI, i.e. an autonomous superintelligent agent, represents the potential for a total loss of control. With AGI, there is a point past which we cannot go back.
I don't agree with the uniqueness of AI risk. A large asteroid impacting Earth is a non-hypothetical existential risk that is presently out of our control. We do not currently plan on spending trillions on a comprehensive early-detection system and the equipment needed to control that risk.
The differentiating factor here is that blocking hypothetical AI risk is cheap, while mitigating real risks is expensive.
We can work on it in the long term, and efforts like developing safe AI may do more to mitigate asteroid risk than scaling up existing nuclear, rocket, and observation tech to tackle it.