
This is just doomerism. Even though this model is slightly better than the previous one, using an LLM for high-risk tasks like healthcare or picking targets in military operations still feels very far away. I work in healthcare tech in a European country, and yes, we use AI for image recognition on x-rays, retinas, etc., but these are fundamentally different models from an LLM.

Using LLMs for picking military targets is just absurd. In the future, someone might use some other variation of AI for this, but LLMs are not very effective at it.




AI is already being used for picking targets in warzones - https://theconversation.com/israel-accused-of-using-ai-to-ta....

LLMs will of course also be used, due to their convenience and superficial 'intelligence', and because of the layer of deniability that putting a technical substrate between soldier and civilian victim provides - as has happened for two decades with drones.


Why? There are many other types of AI or statistical methods that are easier, faster, and cheaper to use, not to mention better suited and far more accurate. Militaries have been employing statisticians to pick targets (and for all kinds of other things) since WWII. This is just current-thing x2, so it's being used to whip people into a frenzy.


Because you can charge a lot more when adding hot and hyped features like LLMs instead of doing good engineering.


I don’t know for sure but I imagine getting blackballed by the defence department is not fun.


It can do limited battlefield reasoning where a remote pilot has significant latency.

Call these LLMs stupid all you want, but on focused tasks they can reason decently enough. And better than any past tech.


> “Call these LLMs stupid all you want but…”

Make defensive comments in response to LLM skepticism all you want— there are still precisely zero (0) reasons to believe they’ll make a quantum leap towards human-level reasoning any time soon.

The fact that they’re much better than any previous tech is irrelevant when they’re still so obviously far from competent in so many important ways.

To allow your technological optimism to convince you that this very simple and very big challenge is somehow trivial and that progress will inevitably continue apace is to engage in the very drollest form of kidding yourself.

Pre-space travel, you could've climbed the tallest mountain on Earth and truthfully claimed that you were closer to the moon than any previous human, but that doesn't change the fact that the best way to actually get to the moon is to climb down from the mountain and start building a rocket.


What? Why are you ranting like this -- I just said it can do enough limited reasoning to change the drone game...

why are you... kinda going off like this?

On top of that, why do you follow me around over the course of ... months making these comments? It's really extreme.


:eyeroll:


Dude you are literally obsessively stalking me in your comments... what is the deal honestly... why?


That seems like something a special-purpose model would be a lot better and faster at. Why use something that needs text as input and output? It would be slow and unreliable. If you need reaction-time-dependent decisions like collision avoidance or evasion, for example, then you can literally hard-wire those in circuits that are faster than any other option.


Yo, this wouldn't make flying decisions; this would evaluate battlefield situations for meta decisions like acceptable losses, etc. The rest, of course, would be too slow.


Note that the IDF explicitly denied that story:

https://www.idf.il/en/mini-sites/hamas-israel-war-24/all-art...

Probably this is due to confusion over what the term "AI" means. If you do some queries on a database and call yourself a "data scientist", and other people who call themselves data scientists do some AI, does that mean you're doing AI? For left-wing journalists who want to undermine the Israelis (the story originally appeared in the Guardian), it'd be easy to hear what you want to hear from your sources and conflate using data with using AI. This is the kind of blurring that happens all the time with apparently technical terms once they leave the tech world, and especially once they enter journalism.


The "independent examinations" is doing a heavy lift there.

At its most charitable, that means a person is reviewing all data points before approval.

At its least charitable, that means a person is clicking "approve" after glancing at the values generated by the system.

The press release doesn't help clarify that one way or the other.

If you want to read thoughts by the guy who was in charge of building and operating the automated intelligence system, he wrote a book: https://www.amazon.com/Human-Machine-Team-Artificial-Intelli...


The IDF explicitly deny a lot of things, which turn out to be true.


Yeah, but the Guardian explicitly state a lot of things which turn out not to be true, too.

Given that the underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets?), and given that the sort of people who work at the Guardian openly loathe Israel, it makes more sense that the story is being misreported.


> The underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets?)

The premise that the IDF would use some form of automated information processing to help select potential targets, in the year 2023?

There's nothing at all unrealistic about this premise, of course. If anything it's rather bizarre to suggest that it might be.

> The sort of people who work at the Guardian openly loathe Israel

This sounds like you just don't have much to say about the substantive claims of these reports (which began with research by two Israeli publications, +972 and the Local Call -- and were then taken further by The Guardian). Or would you say that the former two "openly loathe Israel" also? Along with the Israeli sources that they're quoting?


More likely, the IDF is committing a genocide and are finding innovative ways to create a large list of targets which grants them plausible deniability.


Just like… (checks notes)… oh yeah every government on the planet.


[flagged]


It is more sinister: it is apologia for genocide.


> Probably this is due to confusion over what the term "AI" means.

AI is how it is marketed to the buyers. Either way, the system isn't a database or simple statistics. https://www.accessnow.org/publication/artificial-genocidal-i...

E.g., autonomous weapons like "smart shooter" employed in Hebron and Bethlehem: https://www.hrw.org/news/2023/06/06/palestinian-forum-highli...


[flagged]


> a sophisticated approach to their defense

A euphemism for apartheid and oppression?

> sources are rife with bias

What's biased about terming autonomous weapons as "AI"? Or, sounding alarm over dystopian surveillance enabled by AI?

> The nuance matters.

Like Ben Gurion terming Lehi "freedom fighters" as terrorists? And American Jewish intellectuals back then calling them fascists?

> The history matters... After that, articles such as the ones you posted really read quite differently than most might expect.

https://www.wetheblacksheep.com/p/i-changed-my-mind-on-zioni...


I will spend no more than two comments on this issue.

Most people have already made up their minds. There is little I can do about that, but perhaps someone else might see this and think twice.

Personally, I have spent many thousands of hours on this topic. I have Palestinian relatives and have visited the Middle East. I have Arab friends there, both Christian and Muslim, whom I would gladly protect with my life. I am neither Jewish nor Israeli.

There are countless reasons for me to support your side of this issue. However, I have not done so for a simple reason: I strive to remain fiercely objective.

As a final note, in my youth, I held views similar to the ones you propagate. This was for a simple reason—I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them. However, over time, one realizes that while every story has two sides, the context is crucial. The contextual depth required to grasp the regrettable necessity of Israeli actions in their neighborhood can take years or even decades of study to reconcile. I expect to change few minds on this topic. Ultimately, it is up to the voters to decide. There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values than from arms sales.

I stand by my original comment. As I said, this will be my last on this topic. I hope this exchange proves useful to some.


The total of these two comments makes no objective claims; rather, it says there are nuances and complexities. But amid all this complexity, they are sure that Israel is right in its actions. Bipartisan support is based on shared values, supposedly. Not so surprisingly, it even has an "I have <insert race/group> friends" paragraph.

I've got to say, this is a pretty masterful deceit.


> I strive to remain fiercely objective.

Commendable. You'll appreciate this Israeli historian: https://www.youtube.com/watch?v=xj_HKw-UlUk (summary: https://archive.is/dOP7g). And this Israeli Prof, also an expert on Holocaust studies, being fiercely objective: https://www.mekomit.co.il/ps/134005/ (en: https://archive.is/Fjj6f)

> I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them.

Why even spend 2 comments?

> The contextual depth required to grasp the regrettable necessity of Israeli actions...

The same level of depth as Supremacists who regrettably exterminated non-Aryans?

> There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values.

This is undeniable, but the underlying "shared values" here are not the ones you'd like us to think: https://www.bostonreview.net/articles/instruments-of-dehuman...

> I stand by my original comment.

Like you say, there's the entire might of the US political and elite class behind you; it isn't some act of courage or rebellion, fwiw.

> As a final note, in my youth, I held views similar to the ones you propagate.

Propagate? Your final note sounds like a threat.


The nuance of Ben-Gvir presumably ...


It's absurd, but LLMs for military targeting are absolutely something that some companies are trying to sell, regardless of the many known failure modes.

https://www.bloomberg.com/news/newsletters/2023-07-05/the-us...

https://youtu.be/XEM5qz__HOU


I also work in healthtech, and nearly every vendor we've evaluated in the last 12 months has tacked ChatGPT onto their feature set as an "AI" improvement. Some of the newer startup vendors are entirely prompt engineering with a fancy UI. We've passed on most of these, but not all. And these companies have clients, real-world case studies. It's not just "not very far away"; it is actively here.


>Using LLMs for picking military targets is just absurd. In the future

I guess the future is now then: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...

Excerpt:

>Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

>In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

>According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”


nothing in this says they used an LLM


But it does say that some sort of text-processing AI system is being used right now to decide who to kill, so it is quite hard to argue that LLMs specifically could never be used for it.

It is rather implausible to say that an LLM will never be used for this application, because in the current hype environment, the only reason an LLM would not be deployed to production is that someone actually tried to use it first.


I guess he must have hallucinated that it was about LLMs


>Using LLMs for picking military targets is just absurd

You'd be surprised.

Not to mention it's also used for military and intelligence "analysis".

>using an LLM for high risk tasks like healthcare and picking targets in military operations still feels very far away

When has immaturity and unfitness for purpose ever stopped companies from selling crap?


> picking targets in military operations

I'm 100% on the side of Israel having the right to defend itself, but as I understand it, they are already using "AI" to pick targets, and they adjust the threshold each day to meet quotas. I have no doubt that some day they'll run somebody's messages through ChatGPT or similar and get the order: kill/do not kill.


'Quotas each day to find targets to kill'.

That's a brilliant and sustainable strategy. /s


I use ChatGPT in particular to narrow down options when I do research, and it is very good at this. It wouldn't be far-fetched to feed it a map and traffic patterns and ask it to do some analysis of "what is the likeliest place to hit?" And then take it from there.


I don't know about European healthcare, but in the US there is this huge mess of unstructured text in EMRs and a lot of hope that LLMs can help 1) make it easier for doctors to enter data, and 2) make some sense out of the giant blobs of noisy text.

People are trying to sell this right now. Maybe it won't work and will just create more problems, errors, and work for medical professionals, but when did that ever stop hospital administrators from buying some shiny new technology without asking anyone?
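
For a sense of what "making some sense out of the giant blobs of noisy text" looks like under the hood, here is a minimal sketch, assuming an OpenAI-style chat completion client; the note text, model name, and output fields are all made up for illustration:

    # Minimal sketch: pull structured fields out of a free-text clinical note.
    # Assumes the OpenAI Python client; prompt and fields are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A made-up free-text note, the kind of thing that clogs up EMRs.
    note = (
        "Pt is a 67 y/o M w/ hx of T2DM and HTN, presenting w/ 3 days of "
        "productive cough and low-grade fever. CXR pending."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Return JSON with keys age, sex, conditions, and "
                           "presenting_complaint, using only facts stated in the note.",
            },
            {"role": "user", "content": note},
        ],
    )

    # The output is a guess, not a record -- it still needs clinician review.
    print(resp.choices[0].message.content)

Most of the products I've seen are some variation of this wrapped in a UI; the hard part is validating it against the chart, not calling the API.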



