An AI Lie Detector Is Going to Start Questioning Travelers in the EU (gizmodo.com)
203 points by atlasunshrugged on Nov 1, 2018 | 202 comments



Why are lie detectors legal, and why were they ever used for anything? We know they don't work. And we know this one won't work any better, because they've just shifted the work of sentiment analysis from a "trained expert" onto "AI".


Because movies show them as working and the general public is uninformed. The same goes for forensic tools: what's actually possible versus what TV shows like CSI depict.

Iraq and other countries purchased millions of dollars' worth of phony bomb detectors. Something anyone with a bit of science education can debunk.

Look at all the things on Kickstarter that are obvious pipe dreams or scams, yet people invest.


Reminds me of local towns buying expensive “smart benches” that no one really uses. An easy way to spend some taxpayer money and give the appearance of investing in sophisticated technology, when it's all cheap off-the-shelf components from China.


>Iraq and other countries purchased millions of dollars' worth of phony bomb detectors. Something anyone with a bit of science education can debunk.

https://www.vanityfair.com/news/2015/06/fake-bomb-detectors-...

This is a hilarious read.


This is not hilarious, it's appalling. The person selling this and the people buying it are intentionally putting a lot of other people's lives at risk. I fully believe all involved parties should be tried for something at the level of attempted homicide.


Even if they did work, it would be awful.

Our society was built on large amounts of discretion being exercised by law enforcement and the ability to convincingly lie to law enforcement.

Society is going to get flipped on its head if we are able to detect total truth, and enforce all laws with 100% effectiveness. Even worse when these tools will be mostly in the hands of the government, and not the people.


There's a sci-fi novel called _The Truth Machine_ that looks at what happens in a society when a perfect lie detector is introduced.


> Our society was built on large amounts of discretion being exercised by law enforcement

This I get.

> and the ability to convincingly lie to law enforcement.

This less so. Can you elaborate?


Sometimes, law enforcement can only be lenient (the first part of my original post can only hold) if it's "on the record" that you say something excusable. The LEO might know it's probably a lie, but either they understand a circumstance doesn't merit citation, or they know it's a crappy law.

If everything were some day 100% honest, the polite niceties of "oh sorry, didn't see the speed limit change" or "forgot I left my pocket knife in my carry-on, sorry" would go away. Of course these are only two of countless examples.


Eliminating the shit laws is a much better solution than ignoring them or working around them with lies.


It seems to me that the concept is basically analogous to the idea people have that we should eliminate the human element from contract enforcement. Or suppose we tried to eliminate all tolerances and slippage in engineering. It's utopian/dystopian and even seriously attempting it would be disastrous, in my opinion. I don't see why people need to touch the hot stove to understand that.


I don't think that's reasonable. Speed limits aren't shit, but it's good to have officer discretion on it.


so... add a clause for officer discretion, like the one already implied to exist? Having it as an invisible clause doesn't make it any more or less gameable.


If you cannot lie to law enforcement, it makes it a lot harder to overthrow a fascist regime, for example.

"Do you have any intention of overthrowing the government?" - "No" - beep, and now you're in a forced labor camp


Are you currently engaged in an activity that may not be in the interest of the Monarchy?


> Society is going to get flipped on its head if we are able to detect total truth, and enforce all laws with 100% effectiveness. Even worse when these tools will be mostly in the hands of the government, and not the people.

Even if these tools worked, they wouldn't be able to detect truth, only belief.


Yes, tools like this can be so focused on accuracy and efficiency they forget to account for the value of paper bag compromises: https://www.youtube.com/watch?v=e9YgBF58Qks


Five years of reading HN and I’ve seen comments from all sorts of people.

Former inmates, drug addicts, surgeons, lawyers, mechanics, welders, astrophysicists, particle physicists, pilots. Need I go on?

Not once have I read anyone here who’s admitted to being a high level politician.


Forgive me for being thick, but is your point that we'd need to have a politician's insight to understand the decision?


To paraphrase, the question was "why is this legal when we all know it won't work?" and the answer was "we aren't the people who write the laws".


It's legal when a whole bunch of suits, standing behind a politician who uses their media privilege to amplify an issue, are able to siphon off as many taxpayer dollars as possible without scrutiny.


It’s not so much about the fact that taxes are levied as that there is a lack of accountability for how the money is used.


It's real easy to attach a 'hot issue'/buzzword to a black box that allows the security forces to do whatever they want, and get yourself sweet defense funding [0].

[0] https://slate.com/technology/2013/04/dowsing-for-bombs-maker...


Makes sense! Thanks.


In fairness, there aren't that many high-level politicians. And even if one were reading, posting and admitting that fact would probably be bad, career-wise.


You could always keep who you are anonymous; 'Member of Congress' isn't identifying enough to be damaging.


'Member of Congress' can be narrowed down pretty quickly to a comparatively short list simply by considering which Senators or Representatives are at all likely to be interested in reading about mostly Californian tech companies on a website called Hacker News.

That anonymity disappears much more quickly if said Member of Congress has also posted in the past about where they're from and which political issues they're passionate about...


It's a small enough category and people will be interested enough that there's a good chance a dogged reporter could figure it out in a week given a long posting history. I'm pretty sure that if I admitted to being, e.g., the Zodiac Killer or something and people believed it, then the FBI would probably be able to figure out my identity even without the link to my Github in my profile. And more importantly, you can never be sure that you haven't let something slip that might reveal your identity in a way you didn't anticipate.


Never thought about it but you’re right.


Hey, the first iteration "was only tested on 30 people". It must work for sure. (Seriously, wtf, 30 people?!?)


I hope the test set included politicians - you need to have some subjects who are verifiably liars.


Another thing that occurs to me, especially given recent publicity, is how this affects privacy. There are some/many things that are private and which you do not want the government or corporations to know. They are outside criminal matters and therefore none of their business, but if lie detectors become omnipresent, it will become more and more tempting to use them everywhere to make things "more efficient", and everyone and their dog will start asking more and more things that are none of their business.


I can see the point of trials: as the article says, people told to lie will act very differently from people who want to lie, and a bunch of real-world data versus some lab data is surely going to improve it. I am not convinced that there is nothing about a person that gives away, at least in a good number of cases, whether someone is lying.

I'm just wondering if it's too much to hope that they'll base the decision whether to implement it beyond the trial on data and not on marketing.

And there is of course the argument that it should be possible, albeit with a lot of effort and/or a reasonable punishment, to break the law. (If this was impossible, we'd never get out of regimes we don't agree with, and with the advance of technology I'm afraid we're heading somewhere where a small group is actually able to oppress a huge number of people.)


Though they are not perfectly accurate, they are generally indicative, so it's useful in a statistical game.

Border agents are trained to look for all sorts of things, none of which make you guilty, but a few of which may make you suitable for more questions or a quick search.

So I'd imagine it'll get used in the same context.


I have seen my algorithm outperform me by a large margin at something I am an expert in. The task was a highly ambiguous visual analysis. The trick is that humans focus too much on strong signals. Algorithms are much better at correctly aggregating lots of weak signals.


Did the evaluation of the algorithm's superior performance include both false positives and false negatives?

Personally, I don't like the concept of deploying algorithms for which we don't understand how they work, in scenarios of high health or social impact. It's cute to deploy DNNs to classify cat pictures, as when - not if - it spectacularly fails, nothing of value is lost. But if you can't point at the verified model the algorithm follows, or at least trace the algorithm after the fact, understanding and assessing every step it took, then that algorithm has no place in making decisions about people's lives.


They make a good headline. In this case, the governments of those countries love headlines related to their borders. AI guarding your border sells.


How do you know AI lie detectors don't work?


The "Burden of Proof" lies on the entity making a claim [0]. Besides, we do know that polygraphs don't work [1].

[0] https://en.wikipedia.org/wiki/Burden_of_proof_(law)

[1] https://www.wired.com/story/inside-polygraph-job-screening-b...


We know that a polygraph does not work, but how do we know that this won't work without real-world data?

And nobody is being accused of anything, there is no burden of proof involved yet. If you mean that we can't subject someone to a lie detector while asking questions, i.e. we cannot subject anyone to anything without a reasonable suspicion, then we couldn't have airport security at all unless someone tweeted "today's plane is gonna be da bomb!"

Which would be a lot better than the current state of things in the USA, but that's not how this works.


Because there is no plausible existing technology by which it would.

And it's a similar promise to a lot of other over-promising AI technologies that don't work, like "fake news detectors".


"Because there is no plausible existing technology by which it would"

And how do you know that?


The polygraph is as useful at catching lies as the Scientologist E-Meter is at testing religious conviction: http://www.cs.cmu.edu/~dst/Secrets/E-Meter/


This isn't a polygraph. The article calls it a lie detector and we call polygraph tests lie detectors but the claimed mechanism here is a bit different.

I'm not for it, but it's not a polygraph as described.


Except this is new tech, and your only argument is that the old tech doesn't work, which I agree with.


AFAIK we don't have a single known mechanism of action for reliably detecting lying. ML won't change that - without a known mechanism of action, how can you define reliable features to train on?


Heck, the concept of lying itself is a philosophical minefield. No way they solved that. This thing, as stated in the article, just relies on physiological indicators and muscular activity. So it is a polygraph with a shiny layer of buzzwords sprayed on.

All in all, just another worrying piece of news from Hungary.


They don't really need to detect lying, they need to detect deception. Which is different, and, I'm pretty sure, possible to detect in most cases with advanced enough technology.


So your argument is that you don't know of a mechanism, therefore such a technology is impossible?


No offense, but you have no background in science/logic, do you? Of course I didn't say that. One cannot prove that something doesn't exist. That's why I didn't write it.

Please take a step back and look at all the replies to you. They all essentially say the same thing: we don't know of any objective measurement for reliably detecting lying. Humans can sometimes do it intuitively, yes, but to my knowledge there hasn't been a single successful attempt at generalizing that detection. That's why state of the art means using trained humans, like the airport security in Israel.

Without a mechanism to detect lying that doesn't rely on humans, no amount of statistics (which is what ML boils down to) can help you. ML can't change the fact that you don't know whether a data point is signal or just noise. Without knowing what the signals look like, ML can't calculate the noise away.

Even simpler: for ML you need a dataset with people lying and people telling the truth. That's what you train the model on. But we have no way of building that dataset because, to my knowledge, we can't reliably detect whether someone is lying in the first place.


This isn't new tech; it's going to use micro-gestures, which is questionable science. AI does not mean anything on its own; you need inputs.


This is an interesting article [1]; the opinion of the author is that no reliable studies have been conducted as to whether humans can detect deceit at all. He is skeptical that machines can deduce deceit using micro-expressions.

Quote: "Looking for cues to deception merely from ephemeral facial micro-expressions is questionable and likely fruitless. Micro gestures may be indicative of internal emotional turmoil that is being suppressed, but that is it. The distinguished Paul Ekman, who in fact coined the term micro - expression has stated in his book Telling Lies that micro expressions are rare and they "don't occur that often" (Ekman 1985, 131, 165). Plus as others have said, there is no single behavior indicative of deception (Matsumoto et. al., 2011, 1-4). I am concerned that machines that focus solely on the face will no doubt miss other information from the body (sweating, jittery hand, etc.) or generate lots of false positives because negative emotions abound especially where such machines are intended such as airports (stress of travel, stress of being subjected to searches, or inconvenient interviews, etc) or in a police setting."

[1] https://www.psychologytoday.com/ca/blog/spycatcher/201203/th...


After this poor person got downvoted six levels deep for asking reasonable questions, finally a comment that answers the question. I was wondering the same thing as u/bufferoverflow: sure, polygraphs don't work, but I've never heard of microexpressions being pseudoscience before. Everyone acts as if it's a given that whatever possible lie detector you come up with, it has to be snake oil.


As always, the burden of proof lies (knee slap!) with those who claim a lie detector does work.


I didn't make any claims, but the comment I replied to did.



Because there's no reliable way to tell whether people are lying or not; putting AI in front of it won't change any of that.


Because _we know_ that AI lie detectors can't currently prove beyond reasonable doubt that someone is lying. https://en.m.wikipedia.org/wiki/Hitchens's_razor


Repeated lie detector tests do work for most people. It's silly to think that your average joe is trained to control his/her biological responses to that extent. One lie detector test never works; a lot of them do.


Why do you think these won't work? I would think they have a dataset that they're validating them on if they're calling it "AI".

I think it's unlikely that they'll be perfect, but they may very well be good enough to filter people for further, more intensive screening.


>Why do you think these won't work? I would think they have a dataset that they're validating them on if they're calling it "AI".

You'd be surprised. AI is just a catch-all marketing term. See for example IBM's Watson umbrella brand, which is sold as advanced AI but has had tons of fiascos and has been shown to be different sets of often simplistic implementations.

E.g: https://www.massdevice.com/report-ibm-watson-delivered-unsaf...

https://spectrum.ieee.org/the-human-os/robotics/artificial-i...

https://www.theverge.com/2018/7/26/17619382/ibms-watson-canc...


You're saying these things won't work because IBM misused the word 'AI'?


No, he's saying that we've seen all kinds of absolute garbage marketed using the words 'AI' (Watson is at the relatively complex and useful end of the spectrum) so your suggestion that this project's use of 'AI' as a marketing term implies anything about whether they have an adequate dataset to validate it on is frankly hilarious.

In this case, there's excellent reason to be sceptical about whether the service has a remotely adequate dataset, because even if lie detection were a simple problem, how on earth are you going to get calibration data involving hundreds of data points of real airline passengers lying about having weapons in their luggage or intending to overstay their visa? Never mind calibration data by gender, ethnicity and nationality...


> No, he's saying that we've seen all kinds of absolute garbage marketed using the words 'AI' (Watson is at the relatively complex and useful end of the spectrum) so your suggestion that this project's use of 'AI' as a marketing term implies anything about whether they have an adequate dataset to validate it on is frankly hilarious.

Usually when people say 'AI', IBM included, what they mean is fairly trivial machine learning. Machine learning requires a dataset to learn on.

> In this case, there's excellent reason to be sceptical about whether the service has a remotely adequate dataset, because even if lie detection were a simple problem, how on earth are you going to get calibration data involving hundreds of data points of real airline passengers lying about having weapons in their luggage or intending to overstay their visa?

You could do it by watching film of interactions with border agents when it is still unknown whether or not the person was lying, and then use whether or not they were caught smuggling (or whatever) ex-post as your ground truth.


> You could do it by watching film of interactions with border agents when it is still unknown whether or not the person was lying, and then use whether or not they were caught smuggling (or whatever) ex-post as your ground truth.

You're going to need a lot of film. Good luck picking the controls for that that ensure your ML system picks up on subtle differences in humans that are lying and not glaringly obvious differences in appearance, accent and even background noise between the footage of the convicts and the control group...


> You're going to need a lot of film. Good luck picking the controls for that that ensure your ML system picks up on subtle differences in humans that are lying and not glaringly obvious differences in appearance, accent and even background noise between the footage of the convicts and the control group...

I assume you would isolate the people in pre-processing, and do feature extraction of the things you might think are interesting. Yes, it'd still require a lot of data, but that data may exist somewhere. A lot of these things are filmed.


If anybody had a working lie-detecting AI, a border control pilot in Greece and Hungary (as per TFA) is the last place you'd see it used.

From CIA, and TSA, to your local police department, everybody would have jumped all over it.

This is some vendor managing to sell some BS to bureaucrats (probably with the required bribes).


> From CIA, and TSA, to your local police department, everybody would have jumped all over it.

If CIA had it, you wouldn't hear about it. I don't see any reason why these countries couldn't pilot it for their border security before TSA, though.

FWIW, polygraphs are still used for security clearance screening. So, they are at least accurate enough for that use. They get a bad rep because they aren't admissible in court, but it's not like they have zero correlation to truth telling. They are still quite accurate.


No, I'm saying people selling something as "AI" doesn't mean:

a) it'll work,

b) it's actually any sort of AI,

and that, for example, even reputable companies like IBM can peddle all kinds of shit as AI.

So, IBM was used as an example of a common practice, not as some logical axiom that "because they did it, nothing AI will ever work". Example != logical necessity.

It does, however, cast doubt, especially for such a thorny problem as "lie detection", which nobody has solved thus far.


Well, let's back up. Lie detection is not completely unsolved. Polygraphs are used extensively by security services for clearance screening. They are reasonably good barometers of truth telling. They are not good enough to be admissible in court, but that doesn't mean they have no correlation to the truth.

Even if all this "AI" does is match the quality of a polygraph, it is probably extremely useful in screening people rapidly at the border.


> I would think they have a dataset that they're validating them on if they're calling it "AI".

You obviously don't work for IBM :)

A skilled IBM Watson sales consultant is capable of selling simple linear regressions as Big Data AI!


You know with a big enough corpus of successful sales I think we should be able to train our Sales-Bot to sell simple linear regressions as Big Data AI!


I wish that machine learning wasn’t being mislabeled as AI. It’s pattern recognition, not intelligence.


> It’s pattern recognition, not intelligence.

While I don't disagree that "AI" as a term is being abused, the distinction you're making seems very much a philosophical one.


One could argue intelligence is pattern recognition. One important aspect of human intelligence is the neocortex -- a part of the brain -- which exists in all mammals, and mammals only. It serves as a pattern recognizer in mammal brains, helping them recognize certain sounds, day/night cycles, visual patterns, etc.


Of course. All I'm saying is that presumably they've at least cross-validated that linear regression.


All the lie detection methods I've seen are based on questionable science with high false positive rates. There's no known method to know for sure whether someone is lying, and I don't see how adding "AI" on top of it will change that.


You mean exactly how existing lie detectors claim to work? They slapped AI on an old concept (that isn't useful and has been proven to be worthless a million times over).


Existing lie detectors use a bunch of information like pulse, and galvanic skin response. Presumably this device is not hooked up to people enough to do that, so I don't think it's using the exact same metrics. Also, polygraphs work decently well actually. Just not well enough to be used as evidence in court.


> I would think they have a dataset that they're validating them on if they're calling it "AI".

Where did they get that dataset from? Did they ask a bunch of people a bunch of questions and then hire private investigators to verify the answers (and then hire extra investigators to cross-verify the answers given by the previous bunch)? Because beyond something like this, there's no known way to create such a dataset.


I've considered this. You could create the dataset by looking at interviews with people caught smuggling, before they were caught. Or similar for any criminal activity.


Honestly creating that dataset would be the easiest part of this. Just ask people questions that you already know the answer to.


The imperfection is large enough to cause serious concern.


You know what the error rate on these devices is?


Not to put a fine point on it, but "the team at iBorderCtrl" are charlatans.

Their system is supposed to detect lying from facial expressions. The only kind of "science" purporting to back this possibility is the work of the psychologist Paul Ekman, which is based on flimsy evidence at best. A gigantic hint that this "research" is a bunch of hooey is his claim to have identified 29 "wizards of deception detection" [1] - in very, very literal terms, those are people with the magickal power to tell when someone is lying just by looking at their face (and magickally perceiving revealing facial expressions, subconsciously).

The iBorderCtrl system might indeed be replaced by a wizard, or, why not be more inclusive, a witch, with a magick wand pointed at the traveller while questions are being asked of them. If the traveller is telling the truth, the wand will jump up; if they're lying, it will dive down. The witch is not moving the wand! She's only channeling the MAGICK!

The same magick being channelled by this revolting misuse of technology for the most odious purpose imaginable. It is not a coincidence that this "prototype" is being deployed in Hungary, the country in the EU that has embraced populist, xenophobic tendencies as no other, having elected a master of the craft, Viktor Orbán, as a prime minister and head of government.

This is such utter, utter bullshit. I cannot believe that the Commission agreed to all this. What the fucking fuck.

___________

[1] http://www.communicationcache.com/uploads/1/0/8/8/10887248/o...


Maybe it will be used to justify probable cause. Maybe they pick the people they want to search using other means (digital surveillance) and then create probable cause using this fake machine.


Hey HN. An update on this. It turns out iBorderCtrl are using microexpressions, which is to say, Ekman's work. This is from the Commission's website, the link posted here by another user:

> The unique approach to ‘deception detection’ analyses the micro-expressions of travellers to figure out if the interviewee is lying.

http://ec.europa.eu/research/infocentre/article_en.cfm?artid...


> I cannot believe that the Commission agreed to all this.

I can. I mean, the driving forces in the EU got extremely xenophobic. Italy is ruled by the fascist Salvini, the PiS in Poland are not far behind Orban, the UK government has been xenophobic for decades, France suffers from xenophobia too (after the terror attacks, though) and our own Horst Seehofer risked collapsing the German government over (literally) 0 migrants (per https://www.focus.de/politik/deutschland/seehofers-deal-zahl...)...

What are human rights, what is democracy worth, when there is no one left to enforce them? The EU won't do shit against Hungary, Poland or Italy (as there is a 100% consensus required!), the US under Trump are going isolationist and trampling on human rights wherever they can, and the UN Security Council is powerless against the stuff that Russia, China, Saudi-Arabia etc. do because there's always a veto power that bails out their "friends".


[flagged]


> I don't think anyone would object to Swiss, German, US, Australian or Japanese immigrants... The issue is inflow of immigrants of different cultures, who don't assimilate - and it's not unreasonable to judge those cultures as inferior and unwelcome (in the EU we don't force our women to wear masks and stone them for infidelity...).

Since you point out types of immigrants people wouldn't object to, I also think no one would object to Saudi sheikhs if they were to immigrate. I increasingly feel religion and culture are just a proxy for wealth. A lot of people in Poland don't like Ukrainian immigrants either (of which we have plenty), even though they're culturally very close to us. I increasingly believe that it's all really about wealth, and the perception of being exploited.


Well, there are over 2 million Ukrainians in Poland, most of them working and living legally (with varying legal status), so 'xenophobic' Poland has quietly accepted twice as many refugees (as Ukraine is more or less a failed state, like Libya) as Germany, which is open to immigration and refugees.

Remember that Poland ran the largest immigration experiment in Europe's history in the late Middle Ages, and that was a failure. After 500 years these migrants were still living in ghettos, speaking a foreign language and preserving their religion. It is pretty much taken as fact by Polish people that cultural and religious differences cannot be bridged in 500 years.


Not sure about this; here in London I don't think people support Saudi sheikhs, despite being generally pro-immigration (possibly in contrast to the rest of the UK). Although you're right that almost no politician would oppose immigration of wealth(y).

Edit: Although I do agree that there might be an effect similar to this; I think it was also present in Slovenia versus other Yugoslavian peoples. I wonder how much it has to do with the perception of wealth directly, and how much with feeling threatened by an "inferior" culture - there's probably a correlation; in addition, a poor society is perceived as more likely to emigrate in larger numbers.


> (possibly in contrast to the rest of the UK)

From polls I've seen, I think most of the UK is pro-immigration. The disagreement is over uncontrolled immigration, which is a bit different.

Edit - I'm also in London, and have definitely heard people complaining about rich Saudis buying everything up. (And rich Russians before that.) I suspect there's a useful comparison to be made with US anxieties wrt Japanese investment back in the 90s.


> I also think no one would object to Saudi sheikhs if they were to immigrate.

Considering the amount of distaste for anything Saudi in most people I see across several countries around here, I very much dispute that.


In the US maybe, but in Poland I'm probably the only person out of everyone I know in person who is even aware of the story. Here Saudis are mentioned mostly in anecdotes about obscenely rich people and how alcohol finds its way there even though it's technically banned.


Saudi sheikhs? Not true, at least in France. Plenty of people get pissed when one of the princes closes down a beach each year for his holiday and essentially takes over a small village with the apparent support of the police. Also, if you have ever interacted with many "sheikhs" and their entourages, you'd quickly learn just how undesirable they often are. Russian oligarchs aren't typically welcomed with open arms either.


If you get too many wealthy people, you get a Vancouver, Seattle, London or SF situation, and that isn't very healthy either.

I think you need the people to be of a similar economic makeup to the destination country; otherwise all the rich people buy you out and everything becomes too expensive, or all the poor people act like an influx of 'criminals' who screw up everything according to the perceptions of the receiving populace.


>I also think no one would object to Saudi sheikhs if they were to immigrate.

Plenty of people would complain more loudly about Saudi sheikhs than anyone else. There are a lot of people who want to live their lives without having their lives disrupted by large influxes of people with a completely different culture, and it has nothing to do with wealth. The populist backlash you have seen across the globe recently is largely a result of these people having nowhere else to go politically, because the neoliberal ruling class of the last 30 years refuses to acknowledge their legitimate concerns. Hopefully cooler heads will prevail, and we in the western world will be able to have an open and legitimate debate about immigration and the right of a nation to decide which (and how many) people from starkly different cultures should be imported. Or the self-righteous can just keep calling them racists and bigots for voicing their legitimate concerns and stand by as the next wave of Trumps and Brexits rolls across the western world.


> Plenty of people would complain more loudly about Saudi sheikhs than anyone else.

That's not what happens at all. You've never been to Marbella, have you? I watched a TV show some months ago about how nicely they behave there. Huge tips, very discreet people, lavish mansions with many jobs for the service staff... Now in Spain there's a heated discussion about selling weapons to Saudi Arabia (over the murder of Khashoggi), since our shipyards are a vital source of jobs near my hometown.


No, I haven't been to Marbella, but I've been all around the world, and the Saudis are among the most despised people globally (along with the Israelis).


The fact that Muslims aren't as integrated into European society as they should be is 99% the fault of Europeans. Basic stuff like labor market access is crucial for integration.


>The fact that Muslims aren't as integrated into European society as they should be is 99% the fault of Europeans.

You're blaming Europeans for not forcing their culture onto immigrants. You're also making the naive assumption that a sizable proportion of immigrants WANTS to assimilate - the truth is that certain groups are simply not interested in assimilating. Why would they be? Foreign customs are hard to learn and - here's the part many proponents of immigration conveniently miss - directly opposed to Western customs.

When you are raised to believe that women are property, for example, and that your customs and culture are biblically justified, and some white idiot welcomes you into his country and provides you with free (to you) government assistance while you live comfortably in your homogeneous ghetto, don't be surprised if he gets laughed at, at best, or taken advantage of, at worst.

And when one side not only denies that any of this is happening, but, worse, slanders their countrymen as "xenophobes" for not wanting to trade the culture of their home and homeland for that of ungrateful immigrants, populism suddenly has a very strong appeal.

In the U.S., for example, immigrants vote Democrat. You have millions of people who have called the U.S. their home for generations, and now you have a large group of people flowing into your communities and imposing their cultural values onto you via government. This is not xenophobia - people have a right to preserve their peaceful ways of life - otherwise you are arguing that one culture is superior to another.


> You have millions of people who have called the U.S. their home for generations, and now you have a large group of people flowing into your communities and imposing their cultural values onto you via government.

I assume you must be talking about native americans and the large influx of white people who won't assimilate to their local Ute or Navajo or whatever culture. Because literally any other interpretation I have of this comment is less charitable because it would imply you have a staggering level of hypocrisy.


What happened to the Native Americans happened more than a century ago. And, case in point, how well did it work out for them?

We live here now. Our families, our friends, our communities. We have every right to self-preservation that they did - there is nothing hypocritical here.

The comparison is disingenuous. Ignoring the fact that most natives were wiped out by disease, European arrival in the U.S. was a clash of totally independent groups. There was no forced integration and little intermingling of communities - this was warfare, not simple immigration.

It wasn't right (through the lens of our Western culture) to uproot natives who had built their lives here, but that doesn't justify doing the same to the generations who have since replaced them. In any case, land is finite; all current peoples have displaced others at some point. That doesn't mean their communities and countries suddenly have no right to cultural self-preservation.


Except right-wing populists aren't satisfied with bashing those not willing to integrate. The Bavarian government makes cultural assimilation as hard as possible (well, "made", actually; that topic died down in the media recently). Like forcing immigrants into central camps instead of spreading them out over the country. Or deporting even those with a job and lots of backing in their newfound community. Since being hard on immigrants is the kind of pandering their voters want, they willingly accept worse outcomes for all of us.


What is the justification for forcing these people to accept foreigners into their communities?

And I have no reason to believe that your Bavarian example is representative of other populist agendas. Limited immigration does not mean no immigration.


The justification is that people with wealth and power have a responsibility to do good things for their fellow human beings who are without.


I agree that's a problem, but faulting Europeans is about as stupid as faulting Muslims for not figuring out how to make new jobs. Even the "worst" immigrants into the US (Mexicans) can find (shitty) jobs (the "best" immigrants - Asians - are a success story in all categories, so they're usually ignored in these discussions).

But the obvious solution to this problem, regardless of whose fault it is, is to not import people if you can't offer them jobs/health/education/language courses.


Then offer them the courses! The EU and the US already do this! And besides, what is the benefit of keeping out potential businesspeople (Sergey was an immigrant and Andy Grove was a refugee), doctors, programmers, and more while birth rates and education stagnate?


We do! But resources aren't infinite, so there must be limits.


We could afford all of these things, but for inefficiencies in the system.

Calling human individuals with similar heritages by monikers of worst and best is a gross overgeneralization.

The obvious solution is, there isn't one.


> gross overgeneralization

Yeah I know that's why I put it in quotes. But that's how they're commonly perceived in media and academic institutions.


How is that a justification for reiterating the falsehood that one nationality is inherently inferior to others? I prefer to not use hateful rhetoric even if the "media" does it.


I suppose it is possible that many people find a system with some inefficiency to be practically nicer, easier, and more functional than one with absolutely perfect utilization, allocation, and efficiency.

Efficiency in a systems problem is almost never easy, and it requires everyone who participates to put forth their fullest effort. It is telling to me that when people are unconstrained by space/cost/time, they don't tend to create extremely efficient situations, but in fact prefer ones that are intentionally less so.


Individuals with power absolutely prefer an inefficient system. Arbitrage and gatekeeping ensure their power in the future.


That happens because the US provides work permits to people who come here for humanitarian reasons. Europe does not.


Also, their "desire for education" could play a role somehow. In French migrant suburbs the schools are battlefields because the parents don't teach the kids basic discipline.


Well, ghetto schools always have that because they get less funding, and there are fewer teachers and more work for the teachers (who are already exhausted)! Plus lead and PTSD!


7 years ago, I worked on a government contract (not S/TS) trying to build a lie detector using computer vision and the research done by Paul Ekman (featured in the book Blink). After a few years of building models, we determined that it would be impossible to build an effective detector. I wonder if the research has changed.


Perhaps they use machine learning for this. Was that used in your project?


Ah, yes, the magical, "Machine Learning works even when there is no proper way to determine the hygiene of the training data," argument.


Throw some "cloud blockchain" in the mix and it should be ready for immediate global deployment. I can almost taste the synergy.


OT: a friend & former coworker and I have a pact that's been running for a long time: whenever you hear the word "synergy" you are required to make a hand gesture sliding your index finger on one hand into a ring formed by the other hand.

It's crass and juvenile and totally inappropriate but after more than ten years it's also second nature and still represents the gist of what the synergizer's plans are likely to accomplish.


Do you believe using machine learning would make a product more likely, or less likely to be viable for deployment in the field, in an area of such importance as border control?


Neither, I was just curious. Machine learning is a technique that people at least attempt to apply in a lot of scenarios nowadays. Something like this might sound suitable.


Yes, we used a combination of PCA for feature extraction and then SVMs and logistic regression for classification.
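For the curious, a minimal sketch of what such a pipeline can look like in scikit-learn (random placeholder data standing in for the extracted facial features -- not the actual project code):

    # PCA for dimensionality reduction, then SVM and logistic regression.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 200))   # 500 interview clips, 200 extracted features
    y = rng.integers(0, 2, size=500)  # 1 = "deceptive", 0 = "truthful" (placeholder)

    for name, clf in [("svm", SVC(kernel="rbf")),
                      ("logreg", LogisticRegression(max_iter=1000))]:
        pipe = Pipeline([("pca", PCA(n_components=50)), ("clf", clf)])
        print(name, cross_val_score(pipe, X, y, cv=5).mean())

On random labels both models hover around 0.5 (chance), which is roughly what you'd expect from the real thing too if the features carry no signal.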


Translation for those unfamiliar: they used machine learning. And while SVM is not "advanced enough" compared to "deep learning"[0], it's a very powerful classifier/estimator on its own.

To OP: maybe today, with a huge GPU, lots of data and more complex models, you could get something going. Do you think a more advanced model could really pull this off? Maybe using other features -- perhaps coupling with a thermal camera could help detect changes in temperature that signal the person is lying.

[0] quotes because I've heard this complaint a few times already, and it's usually from people that have bought the buzzword but usually don't understand much about machine learning in general.


I haven't been following this story really closely today, but here are my two cents (albeit outdated):

In the research I was working on, there were two types of deception/lies: high stakes (aka suicide bomber or shoe bomber) and low stakes (you have a pound of cocaine strapped to your leg). If you want to build a lie detector on facial data, you need to consider things like:

1. It is important to have a robust data set that contains lots of example of both for training and cross-validation

2. False positives can be very expensive (because you start pulling people aside, lines build up, people get pissed, etc)

The project I worked on was trying to solve a specific problem: replace Behavior Detection Officers [1]. These are people that sit inconspicuously at TSA checkpoints and decide who gets pulled aside for further screening and who goes through (I don't know anything else about what they do, where they are, numbers, etc.). Their training is advanced, laborious, and expensive for TSA. They are, of course, trying to catch terrorists (aka the high stakes lies) but in reality they are dealing with the low stakes lies. The only terrorist I can think of since 9/11 that got close to taking down a plane was Richard Reid, and they missed him, so they seem to be 0/1 on high stakes lies.

Anyways, the problem with building their replacement is that we do not have a lot of good examples of high stakes lies in video data. There are some (Scott Peterson, etc.) but not enough. Also, the models they were built off of were "micro-expression" [2] detectors, and micro-expressions show up differently across cultures (maybe the consensus has changed since I was modeling this stuff). So, in short, the project I was on failed because the data sucked, and even if we did build a model, it was overfit to the data we trained it on. I am not sure how deep learning and more advanced GPUs would solve that. Maybe they found better data, but I would be skeptical. I honestly would be surprised if they actually did much better than random (ROC AUC > 0.5).

Sorry for the rambling - end of the day for me on the east coast.

[1] https://www.tsa.gov/news/testimony/2013/11/14/tsa-behavior-d...

[2] https://en.wikipedia.org/wiki/Microexpression


Thanks for the answer! I was thinking more along the lines of data augmentation. All in all, I don't think microexpressions alone are a good feature; from your description it's clear that they're problematic on a number of fronts.

The article also mentions microexpressions as the basis for this new system, and says they worked on a small universe of 30 people, with ~76% accuracy. What remains to be seen is whether the new, larger deployment will work better. Also, they may be focusing more on the low stakes lies rather than the high stakes ones.

What I was thinking that "more compute power" could bring to the table would be deeper and more complex models, faster training algorithms[0] and data augmentation. But then again, all that relies on having good data to begin with.

[0] I worked with neural nets in the past (~10 years ago; I pretty much left the field right before the deep learning big bang), and coming back to the field I'm pretty much amazed at how training a 7-layer-deep MLP used to take a few minutes on CPU back in 2010, and now can be done in seconds (still on CPU). And while there's a part of the microarchitectural improvement to account for, that could at best be a 5x speedup -- the rest is owed to newer training algorithms.
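As a footnote to the footnote, here's roughly what that looks like today -- a minimal scikit-learn sketch on random toy data (sizes are made up) just to show a 7-hidden-layer MLP fitting in seconds on CPU:

    import time
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 100))  # toy data, not a real task
    y = rng.integers(0, 2, size=2000)

    # Seven hidden layers of 64 units each, trained with the default Adam solver.
    clf = MLPClassifier(hidden_layer_sizes=(64,) * 7, max_iter=50)
    t0 = time.time()
    clf.fit(X, y)
    print(f"trained in {time.time() - t0:.1f}s")  # seconds on a laptop CPU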

Once again, thanks for your answer!


"iBorderCtrl team, said that they are “quite confident” they can bring the accuracy rate up to 85 percent."

Magical 85 percent accuracy. So it is basically a toy.


So what, like one in six or seven people will be incorrectly flagged as lying? That's just silly; it'd be a total circus trying to use it.


Well, in particular, suppose 1 out of 100 people is lying at the border. Send 10k people over the border, 100 are lying, 9900 aren't.

With 85% accuracy, 85 of the 100 liars are flagged as lying (correctly), and 15% x 9900 = 1485 of the non-liars are flagged as lying (incorrectly).

Thus, a bit more than 5% of people flagged as lying are actually lying, while nearly 95% of people flagged are innocent. This is not even taking into account the possibility that hardened criminals might be less nervous than somewhat anxious normal people.

Enjoy your border crossings, everyone.

EDIT: fix italics

EDIT to add: And that's after they get the accuracy up to 85%. And that's assuming accuracy isn't defined somewhat differently.
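The same calculation in code, for anyone who wants to play with the assumptions (the 1-in-100 base rate and the 85% figures are of course just illustrative):

    # What fraction of people flagged as "lying" are actually lying?
    n = 10_000          # travellers
    base_rate = 0.01    # assume 1 in 100 is actually lying
    sens = spec = 0.85  # 85% of liars and 85% of truth-tellers classified correctly

    liars = n * base_rate
    honest = n - liars
    true_pos = sens * liars          # 85 liars correctly flagged
    false_pos = (1 - spec) * honest  # 1485 honest people incorrectly flagged

    print(true_pos / (true_pos + false_pos))  # ~0.054, i.e. ~5% of flagged are liars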


> With 85% accuracy, 85 of liars are flagged as lying (correctly), [...]

This is not what accuracy means.

85% accuracy just means that 85% of all the decisions the system makes are correct. A system in such a setting, where a single false negative matters a lot more than a single false positive (which would simply be handed over to a human for further investigation) would necessarily be tuned for extremely high recall at the cost of precision. In other words, it would often flag innocent people for further investigation (as you've said), but it would almost never clear people that should've been flagged.


I am not correcting you, but simply illustrating how 85% accuracy tells us very little...

Let's make the spherical-cow approximation that "a lie" is a fully defined concept; then we have 4 conditional (Bayesian) probabilities:

P( "sincere" | sincere) The probability a sincere person is reported as "sincere".

P( "lying" | sincere) The probability a sincere person is reported as "lying".

P( "sincere" | lying) The probability a lying person is reported as "sincere".

P( "lying" | lying) The probability a lying person is reported as "lying".

The first 2 probabilities should sum to 1, and the latter 2 probabilities too, so we have 4-2 = 2 degrees of freedom. A reported "accuracy" tells us nothing without knowing the distribution of liars and sincere people in the test group.
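To make those two degrees of freedom concrete, here are two toy detectors (invented numbers: 10,000 travellers, 2,000 of them lying) that both report 85% accuracy while behaving in opposite ways:

    def report(tp, fp, fn, tn):
        acc = (tp + tn) / (tp + fp + fn + tn)
        precision = tp / (tp + fp) if tp + fp else float("nan")
        recall = tp / (tp + fn)
        print(f"accuracy={acc:.2f} precision={precision:.2f} recall={recall:.2f}")

    report(tp=2000, fp=1500, fn=0, tn=6500)  # flags every liar, many false alarms
    report(tp=500, fp=0, fn=1500, tn=8000)   # never wrong when it flags, misses 75%

Same accuracy, completely different detectors -- which is also why the high-recall tuning mentioned above can coexist with a sea of false positives.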


Ah, yes, you're right. Thanks for correcting my misunderstanding.

https://en.wikipedia.org/wiki/Accuracy_and_precision#In_bina...


Ah, well, yes, looks like I was right, too, though.

Unless I'm mistaken (and that's possible, I've changed my opinion twice now), my example outlined above is

- conceivable, and

- has 85% accuracy (85 people correctly identified as liars, 85% x 9900 = 8415 correctly identified as non-liars, thus a total of 85+8415=8500 of 10k total "accurately" identified), and

- still only 5% or 6% of flagged liars are actual liars.

EDIT to add:

And if the system is tweaked as you suggest, to very rarely fail to flag a liar:

- suppose it correctly flags all 100 liars as liars

- suppose accuracy is still 85%, thus 8500 people in total classified correctly

- thus 8400 non-liars classified correctly (as non-liars), and the remaining 1500 non-liars flagged incorrectly

Now still only 6.25% (100 of 1600) of people flagged as liars are actually liars. Thus, even with the tuning you suggest, the problem remains.

(Note to self: 1. think 2. write)


FWIW, I think you are totally correct if you take accuracy at face value.

You really have to compare precision and recall values to know if the accuracy statement holds true. You could have 100% precision and low recall and still have 85% accuracy (meaning you could never flag someone as lying and be wrong, while missing a bunch of liars, and still have 85% accuracy).

but if everything is totally evenly distributed, then 85% accuracy means 85% accuracy and your first statement is correct.

The real issue is that accuracy is only one piece of the puzzle.


The worst part (in my view) is that they won't be known to be "incorrectly flagged" until after a human has dealt with them. Initially they will just be a "flagged" person (incorrectly or not), and so the immediate bias (unconscious or otherwise) for the human operator will be "this person is a liar".

This is going to be awful - imagine all the people who are anxious anyway, perhaps don't speak the language very well, get confused by the questions, etc. There are going to be a lot of people who get "enhanced screening" and are generally treated like a criminal for no other reason than "the computer said you are a liar".

Awful.


The implication in the story is that if you are flagged as lying, you go to a human agent; if you're flagged as being clear, you're waved through.

So it may be that the bigger problem is the false negatives.


It's worse than that. The current system was tested on 30 people with 50% base rate of lying.

Even if they bring it up to 85% accuracy with 50% base rate, by the time you are dealing with base rates that are more realistic, you're going to run into way more problems than just 1 out of every 7 people.


Forget accuracy, I want to see the full confusion matrix...


So will it let through 15% of liars or stop 15% of those telling the truth?


Yes


What the hell? This is completely unacceptable, who voted this in? I'm not so much concerned about this system, which is a complete joke and is never going to work, as much as I am concerned by the fact that some people thought this was acceptable enough to actually deploy.


Looks like the commission is running this as a pilot.

As always, your contact will be via your MEP, who elects the Commission president (based on the fact that the EPP was the largest party in 2014 and Juncker was that party's presidential candidate).


And also, don't forget to register yourself on the voter lists for the EU's 2019 election!

The countries in which the pilot takes place are mainly EPP-governed (odd correlation...?)

By the way, direct links to the press release and project page, which lists the involved countries:

- http://ec.europa.eu/research/infocentre/article_en.cfm?artid...

- https://cordis.europa.eu/project/rcn/202703_en.html


This is outrageous. I always feel extremely anxious (and in such a case I would also feel angry) and try to pretend I'm not by playing Mr. Spock + smile when passing border (or any other) checks, although I'm neither a smuggler nor a terrorist (but I do have some impostor syndrome). Such a device will probably notice this and get me into trouble.


It also has a built-in bias: the more you are falsely accused, the more stressed you become when you are treated as a suspect.


that's not bias, that's a self-fulfilling-prophecy runaway effect, even worse!


I agree. It is a high anxiety-producing situation, even though I'm doing nothing wrong. I travel yearly to other countries and I still get very stressed.


I would imagine crossing a border is the only interaction most people ever have with law enforcement.


Or at least the only adversarial interaction.


I don't think I've ever spoken to an on-duty police officer in my life.


Crossing the border isn't adversarial. A speeding ticket isn't adversarial (usually, unless you make it that way). They're just routine transactions.

A "driving while X", a terry stop or other fishing expedition type stop is adversarial.

Occasionally we have comment threads on HN that remind me how far removed a lot of people here are from these issues, and it does not make me confident that there will be improvement in my lifetime. This comment thread is one of them. I don't support this AI thing at all, but as far as problems with law enforcement go, border agents' demeanor should be really far down the priority list. I'm all for things that put the issue on people's radar, and I don't wanna come off as saying "but X is worse so we shouldn't care about Y", but this seems like a first world problem.


> Crossing the border isn't adversarial

Without naming countries, you are handled like a criminal very often.

Your definition of 'not adversarial' is intriguing.

> as far as problems with law enforcement go border agent's demeanor should be really far down the priority list

Oh really? When border agents can seriously screw up one's life (for instance, by imposing a decade-long ban, detaining them, or worse) with no oversight, I'd say it's a pretty big problem.

EDIT: > They're just routine transactions.

People have been killed in 'routine enforcement stops'. Agent demeanor is a big deal.


> A speeding ticket isn't adversarial (usually, unless you make it that way). They're just routine transactions.

Thinking about being stopped by the police for speeding as something routine is insane. You shouldn't be routinely doing something dangerous and illegal (speeding is illegal per se in most states).


Minor motor vehicle infractions are routine transactions from the point of view of the cop - just like crossing the border isn't routine for most people, but the customs agent sees hundreds or thousands every day.

>You should't be routinely doing something dangerous and illegal (speeding is illegal per se in most states.)

You should read up on what academics and engineers have to say about speed limits instead of pearl clutching. Unrealistically low speed limits are worse than no speed limits; oftentimes there's politics and road-metric gaming involved in setting a speed limit, so you wind up with limits that are unreasonably low. You can't just pick a limit and expect people to follow it if it's unreasonably low.


> You can't just pick a limit and expect people to follow it if it's unreasonably low.

...why not?


> You can't just pick a limit and expect people to follow it if it's unreasonably low.

I don't see why not - I manage it fine, have never had a speeding ticket, and don't see why other people can't do the same.


Four lane divided, limited-access highway. Lanes are 12' wide. Visibility is unlimited. Road is straight as an arrow out to the horizon. Terrain is flat. Diffuse daylight. No appreciable traffic.

Without being told the speed limit, how fast do you go?

Now that you're going that fast, you enter a section of highway that has been annexed by a nearby incorporated town, its speed limit in town lowered to 35 mph, and its only cop sitting out on the highway, clocking cars and looking for speeders to cite for "70/35".

Without the threat of cops, most drivers generally drive at the subjective and situationally variable speed described by Adams in HHGttG as "R1", which is the maximum reasonable and prudent speed for the ambient conditions. When lanes narrow, R1 drops. When fog blankets the road, R1 drops. When traffic gets congested, R1 drops. Everyone already has ample incentives to drive at a safe speed to protect their affordable insurance premiums, their expensive vehicles, and their priceless lives.

In general, the locales that recognize this set their posted speed limits to the 85th percentile of driver speeds in normal conditions. They only ticket people that are clearly beyond the community norms for R1. The locales that want ticket revenue set the speed limits to R0.9 or less, so that everyone is occasionally subject to citations if they aren't extremely vigilant about avoiding the enforcement.

The latter is why not. When there is profit in catching law-breakers, the law will be adjusted to criminalize otherwise normal behaviors.


> Without being told the speed limit, how fast do you go?

If none is posted, then whatever the national or state speed limit is.

> The locales that want ticket revenue set the speed limits to R0.9 or less, so that everyone is occasionally subject to citations if they aren't extremely vigilant about avoiding the enforcement.

So the only people who aren't caught are those that are carefully observing the speed limit? Sounds good to me. These supposedly revenue-seeking police departments never bother me.


Same here. I don't want to give them the satisfaction. Nevertheless, it is annoying to drive 35 mph on a limited access highway that clearly supports 60 mph.

There are plenty of people willing to actually drive R1, which would be 60 mph, if there weren't any cars poking along at just under the speed limit, trying to avoid giving the cops their pretext for a highway robbery. Setting the limit too low makes the road less safe, because it increases the range of speeds at which the cars are moving.

It is small consolation to see literally half a dozen other cars pulled over on a 5 mile stretch of road-that-should-be-highway, because the town certainly could save money by firing at least three of their cops, and possibly the whole department. There's no excuse for that kind of official depredation. And yet, I don't live there, so I can't vote for them.

It is no sin to break the law when the law is turned against the people, but I am reluctant to pick fights that other people would have to join in order to win. I'd rather enact a boycott on that particular town's businesses--and any on the other side of them--until they clean up their own mess.


As if getting through an airport isn't bad enough, now in these locations we will have to submit to a pseudoscientific farce. (Polygraphs, which record more detailed physiological data and work in concert with a human interpreter, fail to "detect deceit". Clearly this system, which the article says was tested on 30 people before deployment, will produce random results.)


If we could make politicians pass an AI lie detector we build first, then I'd be for this. Firstly, this will never happen; secondly, if for some reason it did, we would then likely have no politicians.


That might inadvertently select for politicians who believe what the voters want to hear over those who lie but know it's a lie. Since the latter want to get re-elected, they'll quietly abandon the good-sounding-but-horrible plans they talked about to get into office, but the former will actually try to carry them out.


Significant note: this appears to be a customs screening, not immigration screening.

Which makes perfect sense since it's extremely hard for customs agents to actually catch smugglers. (Compared to immigration screening which is largely based on hard criteria which human agents evaluate with much less leeway.)

The bigger question is whether it's effective - but anything is likely to be more effective than customs agents selecting people to search based on gut feeling.


"which cost the EU a little more than $5 million"

Well, I didn't know my taxes were used to build such systems. It would have been much more acceptable if they had sold the system as a simple chat bot that records what you say, in case you are involved in a case later. But the "lie detector" part is scary, and this is the typical kind of system Elon Musk warned against.

I don't understand how academics agreed to work on this.


I wonder what the effect on the human border agents will be if this system is popularized. Will it be something like "self-driving" cars, where people start falling asleep at the wheel even though the tech isn't there yet? (I suppose in this scenario that means people who are lying are seen as telling the truth and just get waved through by border control.)


"Lie detection" is probably just the beginning. Think of what could be done by combining this system with facial recognition. Deployed widely enough, this system could build a personalized profile for each individual, mapping emotional state to times and locations, or more crudely it could be used for racial profiling.


Couldn’t we just use a random number generator instead?


Some places do exactly this.

I think the idea was that, with any kind of profiling, it would be easy for drug gangs to figure out who to use as mules. And with any judgement, they can pressurise the guy making the call. But a lottery machine everyone can see is harder to defeat.
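
A uniform lottery like that is trivial to implement and, crucially, has no inputs for a gang to reverse-engineer or an agent to be pressured over. A minimal sketch, with the search rate as an assumed parameter:

    import secrets

    SEARCH_RATE_PERCENT = 5  # assumed: search 1 in 20 travelers, uniformly at random

    def select_for_search() -> bool:
        # secrets gives an unpredictable draw, so selection can't be gamed.
        return secrets.randbelow(100) < SEARCH_RATE_PERCENT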


Doesn’t GDPR have a lot to say about the use of algorithms for decision making? This sort of thing, letting an ‘AI’ decide on its own what kind of airport experience you’re going to have, seems to fly in the face of that.


I don't think GDPR would prevent this. I read through it for work before it was implemented, and I don't recall anything saying algorithms can't make decisions about people. Our company uses algorithms to decide what ad is displayed to people, and I can't recall anything that would make these different under the law.

Furthermore, I believe governments have been given broad exceptions from GDPR for anything related to doing government stuff. This is a form of legitimate interest: "processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller." It is in the public interest that border control laws be enforced in a manner that is both effective and minimizes cost to the public; this also involves the exercise of official border control authority, so it is acceptable.

(Standard reminder that if you are relying on the advice of a stranger for legal issues that matter you should get an attorney.)


I found this:

https://ico.org.uk/for-organisations/guide-to-the-general-da...

It is interesting that there is a comment about explicit consent. I imagine this will be like the millimeter-wave scanners: you can either go in and get your genitals imaged by some remote voyeur, or if you don't like that, you can get a thorough strip search by someone in blue gloves. Have a nice day :-)


Just like the evil TSA imaging systems, best thing to do is refuse to participate.


I imagine it will not be optional when it is out of the experimental phase.


What can they do? Strap you in front of it like it's A Clockwork Orange? You can refuse to respond to it. Cover your face with your hand and say "operator, operator, operator" or some other nonsense. Say it's creepy or ridiculous to talk to such a thing. They will let you go.


If it's mandatory if you want to enter the EU (as a non-citizen), I'd guess they would deny you entry.


Nope. It's for customs, not immigration. You're already in the country, they just need to figure out whether they need to search your bags for contraband.


So then if you refuse to answer they will just search your bags.


Customs agents are supposed to do that. How else do you think smuggling gets stopped? It's pretty much universal that your belongings can be searched (then there are some countries that expand that to searching digital data, but that's a bit of a perversion).


This is being used by literal customs agents.

That's the point. If you refuse then they do a thorough search.

If you answer suspiciously, or refuse to answer, then you get the full search.

This is similar to how you can refuse body scans at the airport, but if you do then you get the full pat down search.


The purpose of the process is to decide whether they need to search your bags for contraband.

If you act very weirdly, like you are suggesting, then they will just make the decision to definitely search your bags.

Targeting people who act weirdly is kinda the entire point, though.


The electronic strip search machines are still optional; they'll (eventually) give you a pat down instead.


There's a lot of skepticism in this thread. I've actually researched this a bit and there is research suggesting AI-based lie detection to be possible. It will need a multimodal approach where it's more than just video images though.

Also, the use case of border control is a great application. There's a lot of misunderstanding in this thread. The AI screening is just a first screening, and if someone fails that, then they go to a human. So moderate false positive rates are acceptable.
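
To put rough numbers on what "moderate" means at airport scale (all rates below are invented for illustration):

    travelers = 100_000
    smuggler_rate = 0.001        # assume 1 in 1,000 travelers is smuggling
    false_positive_rate = 0.10   # assumed AI false-positive rate
    true_positive_rate = 0.80    # assumed AI true-positive rate

    smugglers = travelers * smuggler_rate                             # 100
    flagged_innocent = (travelers - smugglers) * false_positive_rate  # 9,990
    flagged_guilty = smugglers * true_positive_rate                   # 80
    print(flagged_innocent, flagged_guilty)

With those assumed rates, about 9,990 innocent travelers and 80 smugglers get escalated, so the second-stage humans still carry most of the load.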

The commentators in this thread generally make a few mistakes: assuming lie detectors won't work because polygraphs don't work; assuming the border control use case needs to be perfect or have no false positives (it doesn't); assuming Paul Ekman's microexpressions are all that iBorderCtrl is basing its research on (I agree Ekman's research is questionable, and I don't know exactly what iBorderCtrl is doing, but it seems highly likely they're doing more than just looking for microexpressions); and assuming racist intent, or that it'll just flag non-Europeans as liars.
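
For the curious, "multimodal" here typically means extracting features from several signals and fusing them before classification. A toy sketch with synthetic stand-in features; nothing below reflects what iBorderCtrl actually does:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200  # hypothetical labeled interview clips

    video_feats = rng.normal(size=(n, 32))   # e.g. facial-movement statistics
    audio_feats = rng.normal(size=(n, 16))   # e.g. pitch and pause statistics
    text_feats = rng.normal(size=(n, 8))     # e.g. answer-consistency scores
    labels = rng.integers(0, 2, size=n)      # "deceptive" labels from annotators

    # Late fusion by concatenation; real systems would weight the modalities.
    X = np.concatenate([video_feats, audio_feats, text_feats], axis=1)
    model = LogisticRegression(max_iter=1000).fit(X, labels)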


> there is research suggesting AI-based lie detection to be possible

[citation needed]


Yes, of course there is skepticism in the thread. The claim being made is completely ridiculous.

I would be very surprised if the "research suggesting AI-based lie detection to be possible" that you mention is from anyone who has any sort of reputation to protect. The vast majority of machine learning scientists would not touch such obvious pseudo-scientific claptrap with a ten-foot pole. It's the kind of thing that tarnishes one's reputation and never washes off. And rightly so.

I would not welcome, but would grudgingly accept, your references to the contrary.

Also, if iBorderCtrl is not using Ekman's work, then what kind of theoretical framework are they basing their work on? Why is it "highly likely they're doing more than just looking for microexpressions"? Where is all the science of detecting lies by looking at people's faces?

And if they're not basing their work on someone's research, then what are they basing it on?


Well, it turns out, they are using microexpressions - and nothing else:

The IBORDERCTRL system has been set up so that travellers will use an online application to upload pictures of their passport, visa and proof of funds, then use a webcam to answer questions from a computer-animated border guard, personalised to the traveller’s gender, ethnicity and language. The unique approach to ‘deception detection’ analyses the micro-expressions of travellers to figure out if the interviewee is lying.

From the Commission's website, posted here by another user:

http://ec.europa.eu/research/infocentre/article_en.cfm?artid...


I always feel a bit sorry for the rubber stamp monkeys sitting at border control in the US. They are so fucked at so many levels that it isn't funny any more. For starters, their job is purely symbolic, as it is unlikely anyone who shouldn't be there would get that far: it would imply multiple levels of failure in several agencies. You traveled internationally and were profiled, checked, and triple-checked before you even arrived at your gate. They are bureaucratic ass coverage with little practical real-life value. What little value there is has more to do with controlling migration than with any security.

Domestically, many airlines already have automated check-ins (including luggage drop-off in some places), automated boarding, and in some places automated passport checks at immigration. The last human hurdle is security screening, which is mostly theater at this point.

So, automated screening of travelers at the beginning of their journey makes a lot of sense. Mostly it's just confirming the obvious: are they who they claim to be (i.e., does the person showing up match the automated profile already available, and are there any red flags warranting extra attention), and are they carrying anything they shouldn't be carrying. Just as AI is already outperforming physicians at scanning medical images for anomalies, surely state-of-the-art luggage scanners are also outperforming humans (probably by orders of magnitude). Add to that some screening for clear markers that somebody's behavior is a bit off, and you have basically automated away security staff who are likely to miss those signals because they are human beings who get tired, stressed, bored, distracted, biased, etc.

So the combined checks of identity, profiling, automated luggage scans and escalation to humans in case of any doubt should be vastly more efficient. The default case should be zero humans involved with the whole process. When it escalates, you still get the humans in the loop that are then a lot more effective because they already know there were some red flags.

This stuff will probably perform quite poorly at first. But it will still be worth it in identifying the "definitely not lying, don't waste your time on this" category of travelers.


" But it will still be worth it in identifying the "definitely not lying, don't waste your time on this" category of travelers."

That is absolutely not how any of this works, at all.


Anyone who has ever dealt with a chat bot in customer support should be alarmed by this.


In general, respect for human rights and privacy has always seemed an important part of EU ideology. I just hope this is a purely local initiative and that the central EU government will intervene and stop it.


>The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.

That all just seems to be a fancy way of saying it's discriminatory and racist.


In that case, is a system programmed not to recommend vaginal checkups to men sexist? I can totally imagine that different genders lie differently, and if it turns out that they don't (or that different ethnicities don't, or whatever), great, one model fewer to maintain, right?


Seems like the perfect pretext for recording face recognition data for use in surveillance...


Border checkpoints in the Schengen zone, which is most of the EU, are forbidden.


Reality check: tell that to the German border police who are standing on the A93 motorway at Kiefersfelden eyeballing every single driver entering Germany from Austria.

There's more than a whiff of Checkpoint Charlie about the place (portakabins installed directly on what was the motorway surface, concrete traffic-calming measures, a 5 km/h speed limit, armed police, more police sitting in a chase car should anyone decide not to stop, floodlit at night).

I'm sure it's worth it, what with Austria and Germany sharing a fairly long land border which is more or less completely unsecured. Side roads - of which there are plenty - don't get checked much either. <rolls eyes>


I wonder what its cost function is. I sure hope it's better than "get people to say inconsistent things", as I'd expect that to produce a system that is optimized to trip people up.


We need the US embassies to use this. I have been denied visas three times because the interviewers felt I would not return to my home country. If this AI detector were used, at least they would know I am honest about just visiting the USA.


AI can never be better than the training data you feed it. If videos of your interviews were used to train it on how a 'liar' might behave, it will make the same mistakes as the human interviewers.
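
A toy demonstration of that point with synthetic data: when training labels come from interviewers who keyed on nervousness rather than truth, the model learns to flag nervousness too (all names and numbers below are invented):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    nervousness = rng.normal(size=n)       # observable but irrelevant trait
    truthful = rng.integers(0, 2, size=n)  # ground truth, never seen by the model

    # Human "liar" labels driven mostly by nervousness, only weakly by truth.
    labeled_liar = (nervousness + 0.1 * (1 - truthful)
                    + rng.normal(0, 0.5, size=n)) > 0

    model = LogisticRegression().fit(nervousness.reshape(-1, 1), labeled_liar)
    print(model.coef_)  # large positive weight: nervous travelers get flagged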



OBEY




