
I don't like anything about this war, but in a way, I think concerns about AI in warfare are, at this stage, overblown. I'm more concerned about the humans doing the shooting.

Let's face it, in any war, civilians are really screwed. It's true here, it was true in Afghanistan or Vietnam or WWII. They get shot at, they get bombed, by accident or not, they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.

The real issue to me is what the belligerents are OK with. If they are OK with killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that somehow Hamas gets brownie points for not using an AI?




I like this point, and I think you're right that the issue is that target selection may be done badly, not that AI specifically is in the loop. That said, I think an important detail you're overlooking is the frictionlessness of this process. The quote people are throwing around, something like "efficiently producing the largest volume of human targets", gets at this point pretty directly. The problem is not just that the evidence might be flimsy; it's also that it's extremely easy to generate massive lists of targets.

Instead of the Milosevic example, I'd say it's analogous to the Dehomag machines during the Holocaust. The Nazis didn't need advanced database systems to attempt a genocide, but having access to them made it far, far easier to turn the whole process into a factory line: something predictable and constant that allowed it to reach a pace and scope far beyond what they would have managed otherwise. Similar here, or in other cases where advanced technology is brought to bear in war. Anything that makes human death more automated is, IMO, abhorrent and worthy of criticism in its own right.


I agree that making something bad easier is bad too. But does AI make the bad thing easier here?

I see two cases here: one where the AI has some non-negligible accuracy, and one where it doesn't. If it's somewhat accurate, then using it actually saves civilian lives by attacking only the active enemy.

And if it's inaccurate... then presumably whoever made it knows it, and whoever uses it knows it's merely a fig leaf for shooting random people, and is OK with that. Is it then worse to kill random people found by an AI than to drop a bomb somewhere because you have a hunch there might be a worthwhile target there? This is the bit I'm not sure of.

In this war, it's so easy to find the other side. If you want to recklessly shoot civilians, they are just on the other side of the wall. I'm not sure that AI makes it any easier.


The accuracy point is a provocative and interesting question. I'm used to thinking about it in the context of, e.g., medical imaging or autonomous vehicles. In the context of picking bomb targets (where even a "positive" classification is kind of ambiguous [0]), I think it's probably above my pay grade, so I'm going to set it aside.

> whoever uses it knows it's merely a fig leaf for shooting random people

I think this is the problem, but it needs a little more unpacking, because IMO it goes beyond a pure 'fig leaf'. From what I understand, it's not just a way to ID who is a combatant: it actively plans bomb targets. The difference is that a fig leaf provides pure pretense, and as you point out, that's nothing new: we've had automated ways of ID'ing someone as a criminal or terrorist forever. But this not only provides the pretense of ID'ing someone as a combatant, it also loads the gun and aims it for you. So to me it's more than someone saying "oh, these people were all flagged, so let's plan an attack on them"; it's the machine drawing up the full plan and just asking you "I found combatants, should I kill them? [Y]/N". Both are bad (IMO), but the second one seems like a new evolution in the automation of warfare that I find uniquely concerning.

[0] Expanding on this point a little: combatant status seems ambiguous to me because it's not really a physically measurable variable. A car crashing or an image containing a tumor are things that can be objectively verified, but the legal worthiness of killing someone for participation in a war is a far more ambiguous concept, I think. Is someone who quarters enemy troops a legitimate combatant? Someone who provides logistical support? I see lots of room for ambiguity that would be ugly to encode in data.



