I will be utterly astounded if this "study" is replicated. I will not be remotely surprised if they sell it to the gullible fools in public service who don't treat lie detectors with the utter contempt they deserve.
What are currently the best tech buzzwords that might convince people you have magic pixie dust for sale?
The linked study does not support or address lie detection.
It is about attacks on AI face recognition. (Making it misdetect or not detect the subject.)
This is actual fake reporting, unless the author made a critical mistake when pasting the link to the study.
Which is somewhat less than impressive. The quoted "accuracy" is AUC, which weighs all errors the same, and it is not a binary detector. For use in a courtroom, false positives are much more costly.
Remember, this also says nothing about the generalization power of the system, nor about how well humans could be trained to detect lies. (The untrained results were pretty impressive.)
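To make the AUC point concrete: a classifier can post a respectable AUC and still flag an unacceptable fraction of truthful people at whatever threshold you actually deploy. A minimal sketch with scikit-learn, using made-up scores rather than anything from the study:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Toy scores: 1 = lying, 0 = truthful. The two distributions overlap heavily.
    rng = np.random.default_rng(0)
    y_true = np.concatenate([np.ones(100), np.zeros(100)])
    scores = np.concatenate([rng.normal(1.0, 1.0, 100),   # liars
                             rng.normal(0.0, 1.0, 100)])  # truth-tellers

    print("AUC:", roc_auc_score(y_true, scores))  # roughly 0.75-0.8

    # Now pick an operating threshold and look at the error AUC averages away.
    flagged = scores >= 0.5
    fpr = np.sum(flagged & (y_true == 0)) / 100  # truthful people flagged as liars
    print("False positive rate at threshold 0.5:", fpr)  # around 0.3

AUC sums performance over every possible threshold and weighs both error types equally; a courtroom cares about exactly one threshold and one error type.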
There's plenty of science behind micro-expressions and detection of deception.
Paul Ekman [0] is "ranked 59th out of the 100 most cited psychologists of the twentieth century" and I thought the TV show based on his research [1] was very entertaining. (I am surprised that the article used the phrase "micro-expression" without any reference to Ekman's work.)
Your question seems to be inconsistent with the HN guidelines: comments should be constructive.
Specifically, comments should encourage discussion and allow for multiple perspectives on the topic. Your comment reads to me as borderline name-calling and anything but constructive.
I fail to see how asking someone if they know what science is would help with a constructive discussion. Did you mean to imply that the micro-expression work the GP cited is, in your opinion, not scientific? Or are you questioning the GP's knowledge of science?
The problem, I think, is that lie detection software is kind of like jet plane software: even at 99% effective, it can have terrible consequences when it's 1% wrong. Across a million uses, that 1% is ten thousand wrong calls.
If the idea is true, then wouldn't it actually be good?
Or, I guess there may be a distinction between "it is good, for every thing, to be skeptical of that thing" and "for every thing, it is good to be skeptical of that thing".
I consider the statement that "for every statement, it is good to be skeptical of that statement, regardless of what other statements that one is or is not skeptical of" to be doubtful.
It seems fairly clear that skepticism which is directed in a specific way can be harmful?
This is interesting! The ground truth for the study could hardly have been further removed from the real "ground truth". I am afraid this fits a pattern where any dataset whatsoever is thrown at ML algorithms and the results are heralded as the new way of analyzing data, with no regard for the source of the data, its deficiencies, or its applicability to the problem at hand. See the question-generation deep learning papers based on the SQuAD dataset.
You're extrapolating a lot here: this is a demonstration in a mock environment. There's no indication that this is going to be rolled out as-is and used as the final arbiter of justice. Yes, denying justice 10% of the time is unacceptable. But that's not what's being presented here.
As an aside, the comment you're referring to was dead on arrival. It looks like that account has been banned.
I tried for a while to think of a good retort to this article, but I couldn't. So instead:
This is fucking retarded.
They should institute a law: if you are using machine learning to make major decisions about people's lives, then you need to pass a test on basic ML (test sets, validation sets, etc.).
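For what it's worth, the first question on such a test would be the held-out split: never tune or report on data the model has already seen. A generic sketch, not tied to this study:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data

    # 60/20/20 split: train, validation (for tuning), test (touched once).
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    # Model selection happens on the validation set only.
    best = max((SVC(C=c).fit(X_train, y_train) for c in (0.1, 1.0, 10.0)),
               key=lambda m: m.score(X_val, y_val))
    print("test accuracy:", best.score(X_test, y_test))  # the only honest number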
The project is known as DARE. It's an interesting classifier, but not really that great. Try it out; it is open source, so far.
Like many proofs of concept, it starts to fail rapidly when trained on too much data, and it otherwise generalizes only so-so.
This is essentially some feature engineering thrown at an SVM / GNN.
What makes the results even tougher to reproduce is that the training set is not provided, only the resulting matrices.
It is telling when different classifiers work best for the supervised and automatic cases...
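To illustrate what "feature engineering thrown at an SVM" looks like in practice, here is a minimal sketch of that kind of pipeline. The feature count and names are hypothetical, not DARE's actual ones:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Pretend features per clip: micro-expression counts, pitch stats,
    # word-level cues, etc. All values here are random placeholders.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 12))    # 300 clips, 12 engineered features
    y = rng.integers(0, 2, size=300)  # 1 = deceptive, 0 = truthful

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)
    print(clf.predict_proba(X[:5]))   # per-clip deception scores

Everything interesting lives in how those feature columns get computed; the classifier on top is off the shelf, which is why it's telling that swapping classifiers changes the results so much.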
Even if they can, anyone who understands Bayes' theorem will tell you that the false positive rate would be way too high for this to be used in real life.
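The arithmetic, with a 90% figure standing in for the reported accuracy and an assumed 1% base rate of lies:

    # Bayes' theorem with illustrative numbers: a "90% accurate" detector
    # applied where lies are rare is wrong about most of the people it flags.
    p_lie = 0.01         # assumed prior: 1% of statements are lies
    sensitivity = 0.90   # P(flagged | lie)
    specificity = 0.90   # P(not flagged | truthful)

    p_flagged = sensitivity * p_lie + (1 - specificity) * (1 - p_lie)
    print(f"P(lie | flagged) = {sensitivity * p_lie / p_flagged:.1%}")  # ~8.3%

In other words, under those assumptions more than 90% of the people the machine flags as liars would be telling the truth.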
I'm reminded of a scene in "Ex Machina". If you've seen it, you'll know what I mean. If you haven't, I won't spoil it for you. But it's (semi-)relevant.
Interesting point, though I am not sure how the 5th Amendment would apply here. Specifically, I thought the 5th Amendment allows you to not answer a question or not testify. Once you say something, using it to make a case against you looks like fair game.
Accuracy of measuring what? Despite the term, they don't actually detect lies. At best, some of them can detect physiological "events", as when a particular question or word induces acute anxiety. Most interpretation of those events is highly subjective.
The closest thing I've heard of to an objective method, the guilty knowledge test, doesn't center on evaluating the truth of the suspect's statements at all. Basically, the authorities know certain confidential facts about the case, such as the location of a wound, characteristics of a weapon, characteristics of an item stolen by the perpetrator, the non-public "calling card" of a serial killer, the method used to break into a building, etc. The interviewer (who should be unaware of the true fact, to prevent something like the Clever Hans effect) is given a list including the true fact and plausible alternatives to incorporate into the questions. The idea is that a reaction specifically to the true fact indicates that the suspect knows something only the actual perpetrator would know.
As far as I know, interpretation of these results is still ultimately subjective, because there is simply too much variability in human psychology and physiology to define a useful objective standard of when the test is "failed".
If you believe they might work, the operator can do a good cop/bad cop routine with the machine to get you to admit to stuff, frequently stuff that never happened. But they could do the same with a magic rock if you believed in that.