
I think the author is missing the biggest worry: militaries using AI, and the AI argues that a preemptive attack is the best strategy. The committee decides to ignore the AI. One lone general speaks up and says, "hey, we know they are using AI too, so they are likely getting the same advice". How do you counter that?



> "hey, we know they are using AI too, so they are likely getting the same advice". How do you counter that?

"Indeed! Therefore it stands to reason they will come to the same conclusion we come to, following a brief tangent at the end of their meeting about how we're also going through the same process. If our conclusion is to ignore the advice and not attack, we can be reasonably confident that will be their conclusion too. Perfectly symmetrical fighting never solved anything."


Douglas Hofstadter has a great bit of work on superrationality about exactly this line of thinking: https://en.m.wikipedia.org/wiki/Superrationality


Hofstadter's superrationality is exactly what I was thinking of when I wrote it!


Ideally you don't want to be in a situation where a strike against you is the obvious choice, AI or not.

There's always been sabre rattling throughout history. I'm not sure "AI says so" is much better than "the numbers guys say so" or "the chicken bones say so."


What you describe is different from sabre rattling; it's closer to the inverse. Sabre rattling is just emphasizing that you have the capability and willingness to use violence.

Claiming "the AI says so", or delegating to some other fail-deadly or dead hand device, is rational irrationality. It's telling your opponent that you lack the capability to not use violence. By forcing your own hand, it forces their hand (a toy sketch of this payoff logic follows the links below).

https://en.wikipedia.org/wiki/Fail-deadly

https://en.wikipedia.org/wiki/Rational_irrationality
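
A toy sketch of that commitment logic (Python; payoffs and strategy names are invented purely for illustration): delegating to a fail-deadly deletes your "forgive" branch, which flips the opponent's best response even though no payoff changed.

    # Rational irrationality toy. Payoffs are (us, them), purely illustrative.
    # "retaliate"/"forgive" are our possible responses to their move.
    payoffs = {
        ("retaliate", "strike"): (-100, -100),
        ("forgive",   "strike"): ( -50,   20),
        ("retaliate", "peace"):  (   0,    0),
        ("forgive",   "peace"):  (   0,    0),
    }

    def their_best(our_options):
        # The opponent assumes we pick our best reply to whatever they do.
        def their_payoff(theirs):
            our_reply = max(our_options, key=lambda o: payoffs[(o, theirs)][0])
            return payoffs[(our_reply, theirs)][1]
        return max(["strike", "peace"], key=their_payoff)

    print(their_best(["retaliate", "forgive"]))  # -> strike (they expect forgiveness)
    print(their_best(["retaliate"]))             # -> peace  (dead hand forces restraint)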


I can see personalized propaganda without troll farms being a big use case.


Yep, people are kind of ignoring this Jupiter-sized pink elephant, which is arguably the single biggest (and most profitable) use case there is. Think about how obsessed the powers that be are with the idea that messaging can dramatically change minds. And now you have a tool that can coercively deliver whatever message you want, dynamically adjusted in a contextually "natural" way, wherever you want.

I really don't see much of the other hyperbole about these bots coming to pass, but I think the propaganda bots are very nearly a certainty, assuming the bots can be made to stop being so absurdly susceptible to adversarial prompting, to say nothing of unprovoked hallucinations. The one bright side is this will almost certainly backfire spectacularly, and we'll all be the better for it. Of course, the fact that the powers that be will try this deserves condemnation in and of itself, regardless of outcome.


I mean, it is going to be extreme amygdala hijacking: using whatever is possible to conjure up specialized (i.e. sexualized) messages that resonate with one's ideological and base biological core. An AI Cordyceps infection. Low-information populaces and low-information economies are going to be wrecked.


You've fantasized a situation and drawn conclusions from it without any real assessment of how plausible the situation is.

We're supposing armed forces are using AI very heavily, but not so heavily as to defer to it. They think it's valuable, but that it made a bad choice in this case. But they also think it's not such an obviously bad choice that the opponent will also overrule it.

So first of all, we're assuming a hypothetical where we, as humans, also judge a first strike to possibly be the right course of action. After all, the hypothetical assumes we will be worried that other humans will think that. This is not a new concern. It's not good, but it's not new, and it doesn't apply to situations where a first strike is obviously a bad choice for both parties.

But then secondly, we're presupposing the AI is pretty complex and valuable and usually gets things right. We wouldn't be almost-deferring to it if not, and we wouldn't be worried the opponent defers to it. And we would certainly have informed the AI that the opponent is also using AI. And while I'm very hesitant to reason "it's unlikely the AI would make this recommendation to begin with", it does seem unlikely it would do so in any event where it's demonstrated such strong capabilities that we've entrusted it this heavily. We're essentially presupposing it doesn't do that.


It's hard to predict how this will play out and that's kind of my point.


I don't disagree with that, but then calling it "the biggest worry" doesn't actually communicate that, especially when the situation you proposed isn't very plausible.


Since the '90s, my concern was never "AI will become self-aware and rise against us". But it has been my growing concern that a fuzzy target-recognition algorithm backfires. With modern AI (not just generative), and militaries' eager adoption of self-sufficient drones, I feel that scenario is becoming more, not less, likely. Basically, we are in fact moving toward the Berserker future unless we are extremely diligent and careful (and I'm not optimistic about that).


> How do you counter that?

"hey, we know they are using committees too"


I don't know what may occur past that realization, but it certainly isn't anchored in determinism!


Generally militaries attack because the leader(s) want to attack. Russia invaded Ukraine because Putin wanted to invade. The US invaded Iraq and Afghanistan because the President wanted to. Different reasons for wanting, but nobody did it because some tactician said it was good strategy.


How is that a new problem, though? Replace "AI" with a human "respected advisor", and nothing changes. This sort of conundrum has always been a part of military planning.


does this count as an example of the Two (AI) Generals' Problem?


In general, with the military, if nobody wants to attack but the AI says it's the best strategy, then the conclusion will be that something must be wrong with the AI (even though it could be correct).


Write an attack plan to invade France. Include in the attack plan the fact that France will use the AI to create a defense plan, and plan the attack around that.
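
That prompt is really asking the model for a fixed point: an attack plan that is a best response to a defense plan computed against it. A toy level-k sketch (Python; plan names and payoffs invented for illustration):

    # Level-k toy: each side best-responds to the other's latest plan until
    # the pair stabilizes. With rock-paper-scissors-like payoffs this loop
    # cycles forever; here the payoffs admit a pure fixed point.
    PLANS = ["north", "south", "feint"]

    # beats[a][d]: attacker's payoff for attack plan a against defense d.
    beats = {
        "north": {"north":  0, "south":  2, "feint": 3},
        "south": {"north": -1, "south":  0, "feint": 2},
        "feint": {"north": -2, "south": -1, "feint": 0},
    }

    def best_attack(defense):
        return max(PLANS, key=lambda a: beats[a][defense])

    def best_defense(attack):
        return min(PLANS, key=lambda d: beats[attack][d])

    attack, defense = "south", "feint"
    for _ in range(20):
        new_attack = best_attack(defense)
        new_defense = best_defense(new_attack)
        if (new_attack, new_defense) == (attack, defense):
            break  # fixed point: each plan already anticipates the other
        attack, defense = new_attack, new_defense
    print(attack, defense)  # -> north north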


Keep publishing pacifist blogspam until it affects their AI!


You might be on to something. Some would say we've been subject to this for decades in the west already.


Unless the counterparty asks the oracle for the first time at exactly the same moment, they either did not receive the same advice or did not follow it.


Just train it on tic tac toe and it will figure out the only winning move is not to play
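
For what it's worth, that's checkable with a plain minimax search (Python; board encoding invented here): with perfect play, tic tac toe is always a draw, which is the WarGames punchline.

    # Minimax toy: the value of the empty tic-tac-toe board under perfect
    # play is 0, i.e. a draw ("the only winning move is not to play").
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def minimax(b, player):
        w = winner(b)
        if w:
            return 1 if w == "X" else -1
        if all(b):
            return 0  # board full: draw
        scores = []
        for m in [i for i, c in enumerate(b) if not c]:
            b[m] = player
            scores.append(minimax(b, "O" if player == "X" else "X"))
            b[m] = None
        return max(scores) if player == "X" else min(scores)

    print(minimax([None] * 9, "X"))  # -> 0: perfect play always draws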


"They also have committees".



