Hacker News
Courtroom AI system claims almost 90 percent accuracy in detecting lies (vice.com)
30 points by ohjeez on Dec 21, 2017 | 48 comments



I will be utterly astounded if this "study" is replicated. I will not be remotely surprised if they sell it to the gullible fools in public service who don't treat lie detectors with the utter contempt they deserve.

What are currently the best tech buzzwords that might convince people you have magic pixie dust for sale?


>What are currently the best tech buzzwords that might convince people you have magic pixie dust for sale?

blockchain


AI is second best though.


The linked study does not support or address lie detection. It is about attacks on AI face recognition (making it misdetect or fail to detect the subject). This is actual fake reporting, unless the author made a critical mistake when pasting the link to the study.

I suspect, based on the image, he meant DARE: https://doubaibai.github.io/DARE/

Which is somewhat less than impressive. The quoted "accuracy" is AUC, which weighs all errors the same. It is also not a binary detector. For courtroom use, false positives are much more costly.

Remember, this also says nothing about the generalization power of the system, nor about how well humans could be trained to detect lies. (The untrained human results were pretty impressive.)

Very overblown claims as of now.
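
To make the AUC point concrete, here's a toy sketch (my own made-up score distributions, nothing from the paper) of how a detector with roughly 0.90 AUC can still flag a large fraction of truthful people once you pick an operating threshold:

    # Toy illustration, not DARE: AUC says nothing about the false-positive
    # rate at whatever threshold you actually deploy.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y_true = np.array([0] * 1000 + [1] * 1000)             # 0 = truthful, 1 = deceptive
    scores = np.concatenate([rng.normal(0.0, 1.0, 1000),   # truthful scores
                             rng.normal(1.8, 1.0, 1000)])  # deceptive scores

    print("AUC:", roc_auc_score(y_true, scores))            # ~0.90

    threshold = 0.9                                          # chosen operating point
    flagged = scores > threshold
    print("false positive rate:", flagged[:1000].mean())    # ~0.18 of truthful people flagged
    print("false negative rate:", (~flagged[1000:]).mean()) # ~0.18 of liars missed

Same AUC, but nearly one in five truthful witnesses gets flagged at that threshold; a single "accuracy" number hides exactly the trade-off a court would care about.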


There's plenty of science behind micro-expressions and detection of deception.

Paul Ekman [0] is "ranked 59th out of the 100 most cited psychologists of the twentieth century" and I thought the TV show based on his research [1] was very entertaining. (I am surprised that the article used the phrase "micro-expression" without any reference to Ekman's work.)

[0] https://en.wikipedia.org/wiki/Paul_Ekman [1] https://en.wikipedia.org/wiki/Lie_to_Me


Do you know what science actually is?


Your question seems to be inconsistent with HN guidelines: comments should be constructive.

Specifically, comments should encourage discussion and allow for multiple perspectives on the topic. It appears to me that your comment is borderline name-calling and anything but constructive.


Thanks for your concern, but I disagree. Science means something, and really, don't confuse word count with value.


I fail to see how asking someone if they know what science is would help with a constructive discussion. Did you mean to imply that the micro-expression-related work the GP cited is not scientific, in your opinion? Or are you questioning the GP's knowledge about science?


It seemed like a legitimate question to ask someone referring to the content of those links as “plenty of science...”


Do you know what "constructive" actually means?


See? Brevity can work.


I think you missed the point - it wasn’t about brevity but about being constructive.


The problem, I think, is that lie detection software is kinda like jet plane software: even at 99 percent effective, it can have terrible consequences when it's wrong the other 1% of the time.


What is your skepticism based on?


Based on the concept that you should always be skeptical of everything, and even more skeptical when it comes to fantastic claims.


> Based on the concept that you should always be skeptical of everything

I will put this on a plaque. It is unfortunately not a popular opinion.


I'm skeptical that it's good to be skeptical of everything...


That's not surprising. Unfortunately you are in the majority.


If the idea is true, then wouldn't it actually be good?

Or, I guess there may be a distinction between "it is good, for every thing, to be skeptical of that thing" and "for every thing, it is good to be skeptical of that thing".

I consider the statement that "for every statement, it is good to be skeptical of that statement, regardless of what other statements one is or is not skeptical of" to be doubtful.

It seems fairly clear that skepticism which is directed in a specific way can be harmful?


The title is very misleading. These are "pretend" courtrooms; there is no data from actual legal proceedings.

From the article: This was based on evaluations of 104 mock courtroom videos featuring actors instructed to be either deceptive or truthful.


This is interesting! The ground truth for the study could hardly have been further removed from the actual ground truth. I am afraid this fits a pattern where any dataset whatsoever is thrown at ML algorithms and the results are heralded as the new way of analyzing data, with no regard for the source of the data, its deficiencies, or its applicability to the problem at hand. See the question-generation deep learning papers based on the SQuAD dataset.


AI used for selecting Courtroom Extras for Law and Order SVU Season 31.


alphaalpha101's now-dead comment is reasonable: denying justice in one out of ten cases is an unacceptably high rate.


You're extrapolating a lot here: this is a demonstration in a mock environment. There's no indication that this is going to be rolled out as-is and used as the final arbiter of justice. Yes, denying justice 10% of the time is unacceptable. But that's not what's being presented here.

As an aside, the comment you're referring to was dead on arrival. It looks like that account has been banned.


I agree. But what's the error rate now?


I tried for a while to think of a good retort to this article, but I couldn't. So instead:

This is fucking retarded.

They should institute a law that if you are using machine learning to make major decisions about people's lives, then you need to pass a test on basic ML (test sets, validation sets, etc.).


Perhaps one could insist on an explanation and a reasonable defense of the outcome from these ML algorithms. Would that be a feasible retort?


I would have liked to read the actual paper, but the arXiv link leads to something completely unrelated: https://arxiv.org/abs/1712.05526

Does anyone have a correct link?


They meant https://arxiv.org/abs/1712.04415 (based on a full text arxiv search of the text mentioned in article)


The project is known as DARE. Interesting classifier, but not really that great. Try it out; it is open source so far. Like many proofs of concept, it starts to fail rapidly when trained on too much data and otherwise generalizes only so-so.

This is essentially some feature engineering thrown at an SVM / GNN. What makes the results even tougher to reproduce is that the training set is not provided, only the resulting matrices.

It is telling when different classifiers work best for supervised and auto cases...
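
For anyone wondering what "feature engineering thrown at an SVM" looks like in practice, here is a minimal sketch of that kind of pipeline (my own guess at the general shape, not DARE's actual code; the feature dimensions and random data are placeholders):

    # Illustrative only -- not DARE's implementation. Hand-crafted per-video
    # features (e.g. micro-expression scores, audio statistics, transcript
    # features) are concatenated into one vector per video and fed to an SVM.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(104, 40))      # 104 videos x 40 hand-crafted features (placeholder)
    y = rng.integers(0, 2, size=104)    # 1 = deceptive, 0 = truthful (placeholder labels)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random data, as expected

With only 104 videos and no released training set, it's easy to see why reproducing or auditing the reported numbers is hard.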


Alternate headline: Best Courtroom AI wrongly accuses 1 in 10 people of lying.


But what is the jury's alternative (unassisted) effectiveness?


Even if they can, anyone who understands Bayes' theorem will tell you that the false positive rate would be way too high for this to be used in real life.
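
For the record, the back-of-the-envelope version, with made-up but plausible numbers (90% sensitivity and specificity, and say 10% of witnesses actually lying):

    # Base-rate illustration with assumed numbers, not figures from the study.
    sensitivity = 0.90   # P(flagged | lying)
    specificity = 0.90   # P(not flagged | truthful)
    base_rate   = 0.10   # P(lying)

    p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    p_lying_given_flagged = sensitivity * base_rate / p_flagged
    print(p_lying_given_flagged)  # 0.5: half the people the system flags are telling the truth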


I'm reminded of the scene in "Ex Machina". If you've seen it, you'll know what I mean. If you haven't, I won't spoil it for you. But it's (semi-)relevant.


Not designed by sociologists.

More rubbish out of the polygraph playbook. AI is the modern Mechanical Turk when applied to the complex nonlinearities of human behavior.


It's the 10% I'm worried about - what's the false positive rate? Priors or it didn't happen...


Wouldn't this violate the Fifth Amendment (self incrimination clause) if actually used?


Interesting point, though I am not sure how the 5th Amendment would apply here. Specifically, I thought the 5th Amendment allows you to not answer a question or not testify at all. Once you do say something, using it to make a case against you looks like fair game.


You can avoid all this by not testifying.


What's the accuracy of lie detector machines?


Accuracy of measuring what? Despite the term, they don't actually detect lies. At best, some of them can detect physiological "events", as when a particular question or word induces acute anxiety. Most interpretation of those events is highly subjective.

The closest thing I've heard of to an objective method, the guilty knowledge test, doesn't center on evaluating the truth of the suspect's statements at all. Basically, the authorities know certain confidential facts about the case, such as the location of a wound, characteristics of a weapon, characteristics of an item stolen by the perpetrator, the non-public "calling card" of a serial killer, the method used to break into a building, etc.

The interviewer (who should be unaware of the true fact, to prevent something like the Clever Hans effect) is given a list including the true fact and plausible alternatives to incorporate into the questions. The idea is that a reaction specifically to the true fact indicates that the suspect knows something that only the actual perpetrator would know.

As far as I know, interpretation of these results is still ultimately subjective, because there is simply too much variability in human psychology and physiology to define a useful objective standard of when the test is "failed".


Terrible (50%?). They're so bad their results haven't been admissible in court for years.


Exactly, and the 50% statistic doesn't convey the actual reality.

The better fact to cite is that a monkey is as accurate as a polygraph machine.

Remember that the accuracy sits at 50% _because it cannot be any lower_. If it could go lower, you would simply invert its output and use that instead.


I don't think they've been shown to be better than placebo.


If you believe they might work, the operator can do a good cop/bad cop routine with the machine to get you to admit to stuff, frequently stuff that never happened. But they could do the same with a magic rock if you believed in that.


Apparently this has been done with some success by using a photocopier as the “lie detector”: https://newrepublic.com/article/38982/the-wire-ripped-real-l...


What about half truths?



