I’ve had people look at me like I’m crazy when I say that the A.I. that will be a threat to humanity will be of Chinese origin. It’s this exact “succeed, at any cost” mentality that cements my belief.
The boosters should have their own flight termination system, which should have fired if they were going to fall anywhere near populated land. Every launch in the US has to have one, even launches from Florida out over the ocean, and each independent flight component (like a rocket booster) has to have its own.
Allowing a rocket to fall on land like that shows extreme contempt for public safety.
I think what OP might be saying is that the attitude that it's OK to just yeet a rocket booster at a populated area might also not yield much safety when it's applied to Artificial General Intelligence.
Yes. As a guest or visitor. I am granted a lot of deference. Especially in someone's home. Chinese are very friendly.
But China is modernizing and progressing very rapidly, and much of that has a human cost. So I think OP's implication still stands: China will not be as worried about controlling AI. As many fear, China will use AI for surveillance and control.
I imagine you're correct if we're talking about the elite enjoying more legal privileges than us mere commoners. Both countries do have that in common.
You probably still have a >50% chance that it's going to land over water, since that's the direction it was initially moving. Yes, if flight termination fails, and by chance it vectors backwards towards land somehow instead of just spinning like out-of-control rockets normally do, then it may fly far enough to wind up on someone else's property. But that's why launch sites are located at a reasonable distance from inhabited areas. The point is that it's very different to take all of those precautions vs. just flying your rockets over inhabited areas as a matter of course and not giving a fuck if they land on someone.
In general I agree with the point you are making, but if you will indulge my "well actually..."
From the Range Commanders Council Range Safety Group, 2010, summary of FTS requirements:
2. produce a small number of pieces, all of which are unstable and impact within a small footprint;
And another requirement relevant to this particular video, where the spent booster clearly had uncombusted carcinogenic hypergolic fuel aboard when it hit the ground:
3. control disposition of hazardous materials (burning propellant, toxic materials, radioactive materials, ordnance, etc.);
- American AI “safety” practices have little or nothing in common with best practices in other fields of engineering, to the extent they can be said to exist at all (most AI safety work focuses on making sure the AI doesn’t say anything that might offend someone).
- When a rocket blows up, people die but we learn from the mistake. When an AI seriously “threatens humanity”, humanity dies and we very possibly don’t get a second chance.
A rocket booster falling on your head is a real threat.
AI taking over the world is, as yet, an imagined threat.
When AI proves it will take over the world and China builds theirs without the "don't take over the world" code mandated by every other country, let's talk more.
I don't feel like parent was talking "in the context of this video" as they mentioned AI, so I figured we left that context behind in this comment-tree. Seemingly, I was wrong.