AIUI, a Javelin missile ( https://en.wikipedia.org/wiki/FGM-148_Javelin ) works by pointing its camera at your target and holding it in-frame for a few seconds while the missile learns what the target looks like well enough to home in on it once launched.
I imagine the Pentagon would pay billions for an LLM that could shave a couple of seconds off that target-acquisition time.
One high priority for the military has been using AIs to analyze massive volumes of data. A common example is analyzing satellite photos for interesting objects.
They say they want to output a shortlist to aid human decision-making, but of course they might decide to skip that step.
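The detect-then-shortlist pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, field names, thresholds, and detection values are all invented, not anything from a real military system.

```python
# Hypothetical sketch of the "shortlist" pattern: an automated detector
# scores objects in satellite imagery, and only the top few detections
# are surfaced for a human analyst to review. All values are invented.

def shortlist(detections, threshold=0.8, max_items=3):
    """Return the highest-confidence detections for human review."""
    flagged = [d for d in detections if d["score"] >= threshold]
    flagged.sort(key=lambda d: d["score"], reverse=True)
    return flagged[:max_items]

# Fabricated detector output for one image tile.
detections = [
    {"label": "vehicle convoy", "score": 0.93},
    {"label": "building",       "score": 0.55},
    {"label": "aircraft",       "score": 0.87},
    {"label": "terrain",        "score": 0.12},
]

for d in shortlist(detections):
    print(f'{d["label"]}: {d["score"]}')
```

The human-in-the-loop step is just the consumer of that list; "skipping that step" amounts to wiring the output straight into an action instead of a review queue.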
Propaganda - now available in any language on earth. Coming soon - personalized with the voices of your loved ones, which voice applications are currently happily harvesting.
I strongly feel that this is where the really destructive potential of military LLM applications lies. We've already seen faked images, voice clips, and videos of prominent political figures being disseminated. It's not difficult to imagine that a well-planned disinfo attack could paralyse or misdirect the political apparatus and public sentiment of a state at some critical moment (e.g., during a coup attempt or invasion).
Everyone might be able to work out which snippets of media were true and which were not within a few days, but if it happens at a crisis point, the loss of time and momentum could be catastrophic.