Hacker News

Legal question: if a writer loses a job due to a false accusation of using AI, would they win a damages lawsuit against the AI detector service if they can prove that AI content detection is knowingly imprecise?

That's why those types of services (including OpenAI's initial AI detector) often have huge legal disclaimers saying not to take it as 100% accurate.




Kinda like asking for the source code to the breathalyzer?

https://arstechnica.com/tech-policy/2007/08/im-not-drunk-off...


I would kill to watch some smartass show up to court with their own ridiculous-looking black box "breathalyzer" and draw the parallel to "why do you magically trust theirs and not ours?"


I feel like the only possible way for this to work would be to get the opposing side to prove that the ridiculous fake breathalyzer doesn't work, only for the box to be opened and reveal that it is in actuality the same model of breathalyzer that was used against the defendant.

Courts (and authority in general) usually don't appreciate the "oh yeah, well from my point of view the jedi are evil" defense.

However, very occasionally, tricking the prosecution into skewering its own strategy will work ... that is, at least, if the deciding authority is in a good mood and you can convince them you're the pragmatic choice.


Be the change you want to see in the world. And when you do, please use a cardboard box with "Very Real Breathalyzer" written in Sharpie on the side and maybe a Nixie tube sticking out the top.


And inside of it is their breathalyzer.


You beat me by four minutes.

I feel this is the only way this has a remote chance of actually working. However, I think you also need to make sure that the judge feels in on the joke as opposed to being the butt of the joke.


The answer would be simple: because we pay this other group with taxpayer money.


For those interested in just how corrupt the relationship between completely bogus "science" and unregulated "industry" can become, check out the ADE 651 [0].

In the late 2000s a British company, Advanced Tactical Security And Communications (ATSC), started selling dowsing rods, basically made from old coat-hangers and an empty box, as "long range explosive detectors".

From Wikipedia:

    """ The device has been sold to 20 countries in the Middle East
    and Asia, including Iraq and Afghanistan, for as much as US$60,000
    each. The Iraqi government is said to have spent £52 million on
    the devices """"
It's almost certain that serving personnel were killed while relying on the fake devices to detect IEDs. ATSC's founder was convicted of fraud and sentenced to ten years in jail.

It's no surprise that the "AI detection" racket is going to be a field day for every hoaxer and huckster. But companies like Turnitin have long been selling "black-box" snake-oil "plagiarism detectors" that have blighted higher education, ruined millions of students' lives, and created thousands of make-work hours for professors.

Really though, it's our fault that we keep treating technology as magic, failing to exercise radical scepticism and robustly challenge the low-quality products peddled by tech companies.

[0] https://en.wikipedia.org/wiki/ADE_651


I personally hope this will be the course of action taken against these AI detector services. It is well known that they are essentially snake oil, and they have been damaging in other contexts too, such as academic settings where students are penalized over similar false accusations.

As LLMs continue to improve, it is only going to get more difficult to accurately distinguish AI-generated from human-generated content, so barring some kind of AI-detection breakthrough, these services will continue to plague students and writers with horrible false positive rates.


I'm not sure it would matter here. In my understanding, the victim was a freelance writer / contractor, not an employee. It kinda sucks, but I don't think most job protection laws would apply here irrespective of AI involvement. Obviously this depends a lot on country / jurisdiction.


Defamation, not job protection: the lost freelance job is the source of damages, not the basic legal wrong.


No. But you could sue your employer.



