Hacker News

Oh, tell me about it. I think that's the kind of innovation that's happening now with the drone warfare you see in Ukraine. People tell me that "Russia has 100k tanks to Ukraine's 10k", and I think: those high-level numbers do not matter; what matters is what happens when the two meet on the battlefield. If one $50k drone can consistently take out millions of dollars of equipment, it doesn't matter how expensive or numerous that equipment was, or how good it would be at fighting some hypothesised similar adversary.

The superpowers of the world have gone through several successive 'generations' of military technology without really having a war in which to use them. (Just skirmishes with pre-industrial desert and jungle people, and the occasional mismatched murky proxy war with export-grade technology.) These mega-elaborate aircraft carriers and fighter jets and tanks are like radar-enabled cavalry, and will be taken out with drones and handheld rockets, and whichever modern-day Kroll is clever enough to strategise will make an absolute, uh, killing.




As someone who works in ML, this is actually what concerns me. Everyone is talking about AGI (artificial general intelligence), but I don't think that's the big concern yet. We have already entered a world where you can create drone "mines": it is cheap and easy to build a drone that carries an explosive payload, hides, and automatically seeks out enemy combatants or vehicles. (Note that drones are pretty difficult to detect.) The tech is a little rough now and requires human oversight if you don't want to violate the laws of war, but it is definitely possible (and rapidly getting better).

> If one $50k drone can consistently take out millions of dollars of equipment, it doesn't matter how expensive or numerous that equipment was, or how good it would be at fighting some hypothesised similar adversary.

Except this isn't even true anymore. It is really a $1k drone taking out millions of dollars of equipment, with a 70+% success rate. That's a real game changer.

We don't need AGI for ML to be dangerous. We just need people to use existing algorithms dangerously and/or recklessly.




