
> Second, so what if a test was designed to trip up a model? Shouldn't we be determining when and where models fail? Is that not a critical question in understanding how to use them properly?

People are rushing to build this AI into all kinds of products, and they actively don’t want to know where the problems are.

The real world outside is designed to trip up the model. Strange things happen all the time.

Because software developers have no governing body, no oaths of ethics, and no spine, someone will end up dead in a ditch from malfunctioning AI.




> The real world outside is designed to trip up the model. Strange things happen all the time.

Counterpoint: the real world is heavily sanitized toward things that don't trip up human visual perception too much, or otherwise inconvenience us. ML models are trained on that, and for that. They're not trained to deal with synthetic images that couldn't possibly exist in reality and are designed to trip visual processing algorithms up.
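For a concrete sense of what "designed to trip visual processing algorithms up" means, the textbook construction is a gradient-based adversarial perturbation (FGSM). This is only a minimal sketch, assuming PyTorch and torchvision are available; the model choice, function name, and epsilon value are illustrative, not anything claimed in this thread:

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Illustrative model choice; any differentiable image classifier works.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_attack(image, label, epsilon=0.03):
        # image: (N, 3, H, W) float tensor in [0, 1]; label: (N,) long tensor.
        # (ImageNet normalization is omitted here for brevity.)
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that most increases the loss: a tiny,
        # human-imperceptible change chosen precisely to push the model
        # toward a wrong answer.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    # Usage: compare model(x).argmax(1) before and after fgsm_attack(x, y);
    # the perturbed image typically flips a correct prediction.

The perturbed image looks identical to a human, but by construction it is an input that never occurs in the natural photos the model was trained on.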

Also:

> People are rushing to build this AI into all kinds of products, and they actively don’t want to know where the problems are.

Glass half-full (of gasoline) take: those products will trip over real-world problems, identifying them in the process, and the models will get better walking over the corpses of failed AI-get-rich-quick companies. The people involved may not want to know where the problems are, but by deploying the models, they'll reveal those problems to all.

> Because software developers have no governing body, no oaths of ethics, and no spine, someone will end up dead in a ditch from malfunctioning AI.

That, unfortunately, I 100% agree with. Though AI isn't special here - not giving a fuck kills people regardless of the complexity of software involved.


> They're not trained to deal with synthetic images that couldn't possibly exist in reality and are designed to trip visual processing algorithms up

Neither of these claims is true. ML models are trained heavily on synthetic images; in fact, synthetic data generation is the way forward for the "scale is all you need" people. And there are also loads of synthetic images out in the wild, everything from line art to abstract nonsense. Just take a walk downtown near the bars.

> not giving a fuck kills people regardless of the complexity of software involved.

What frustrates me most is that this "move fast, break things, and don't bother cleaning up" attitude is common not only in industry but also in academia. The two are deeply intertwined these days, and it's hard to publish without support from industry because people only evaluate on benchmarks. And if you're going to hack your benchmarks, you just throw a shit ton of compute at it. Who cares where the metrics fail?


> Because software developers have no governing body, no oaths of ethics, and no spine, someone will end up dead in a ditch from malfunctioning AI.

The conclusion and the premise are both true, but not the causality. On AI, the Overton window is mostly filled with people going "this could be very bad if we get it wrong".

Unfortunately, there are enough people who think "unless I do it first" (Musk, IMO) or "it can't possibly be harmful" (LeCun) that it will indeed kill more people than it already has.

The number who are already (and literally) "dead in a ditch" is definitely above zero if you include all the things that used to be called AI when I was a kid, e.g. "route finding": https://www.cbsnews.com/news/google-sued-negligence-maps-dri...



