This is like "Know Your Customer" (KYC) rules required of banks and financial companies. So many online providers ask for it, and it's so easy to bypass with stolen documents that it's meaningless. I've even seen customers set up booths where they pay people in slums for their documents and walk them through a "live" in-app KYC process to generate more accounts.
> I've even seen customers set up booths where they pay people in slums for their documents
and here I am with my GPUs heating my apartment running deep-fake scripts to pass Binance Fake-YC after buying FULLZ off of Empire, when I could just be paying people in the tents outside the ground-floor apartments.
I always felt like I was getting out of touch after the guy was passing out 'Elitist Tech Scum' collectible pins at Zeitgeist, but I thought it was just ironic until this very moment.
Exactly. It's a big fallacy to think, "this security measure can be circumvented by a sufficiently capable and motivated attacker, therefore it is pointless."
Another huge aspect, and this is why EV certificates are not only a good thing but should be mandatory in PKI rather than deprecated out: they add burden and expense to acting fraudulently, which makes fraud more prohibitive at scale. If one thing discredits security expertise for me, it's the claim that things like EV certs and other forms of KYC are wholly ineffective just because it's technically possible to trick them.
In turn, it makes efforts to police fraud more effective, because the cost is higher each time a fraudulent actor's credentials are burned.
This is actually a really good point for a different reason: if Google can add verification that depends on other infrastructure or parties, this shifts the attack vector from just Google to those other players. Say they require some paperwork or an organizational registration; now the baddies have to focus on "how do we generate fake credentials at scale", which means less work for Google.
No, EV is largely pointless because it relies on humans, and it's fascinating how few people understand that.
For every single HTTPS transaction (and there may be dozens involved in even fairly mundane-seeming activities) the browser is able to compare the SAN dnsName (or rarely ipAddress) to the host named in the HTTPS URL. It does this unblinkingly every single time, and if it fails then (in the best case) the transaction just fails entirely or (in the less good legacy case) there's some sort of "Oops, something bad happened, don't trust this" behaviour.
But whereas SAN dnsNames and ipAddresses are something a machine can compare to the host in a URL, the EV identity is something only humans have opinions about and humans don't want to make dozens of such decisions when they click on a funny video of a cat.
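The difference can be made concrete. Here is a minimal sketch (not any browser's actual code, and only a simplified form of the RFC 6125 rules) of why the dnsName check automates so well: matching a SAN dnsName against a URL's host is a pure string comparison a machine can run on every request, while the EV organisation name has nothing in the URL to compare against.

```python
# Sketch: SAN dnsName matching is machine-checkable; EV identity is not.
# Simplified from the RFC 6125 name-matching rules for illustration only.
from urllib.parse import urlparse

def san_matches(url: str, san_dns_names: list[str]) -> bool:
    """True iff the URL's host matches one of the cert's SAN dnsNames.
    Supports a single leading wildcard label, e.g. "*.example.com"."""
    host = urlparse(url).hostname or ""
    for san in san_dns_names:
        san = san.lower()
        if san.startswith("*."):
            # A wildcard matches exactly one left-most label.
            if "." in host and host.split(".", 1)[1] == san[2:]:
                return True
        elif host == san:
            return True
    return False

print(san_matches("https://www.example.com/cat.mp4", ["*.example.com"]))   # True
print(san_matches("https://evil.example.net/cat.mp4", ["*.example.com"]))  # False

# There is no analogous function for the EV organisation name: nothing in
# the URL says "Alphabet Inc.", so only a human can have an opinion on it.
```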
Is it OK that this funny cat video is from "Alphabet Inc." ? How about "XXVI Holdings Inc." ? Why is that OK but "Funny Cat Videos Ltd." isn't? How about "You Tube" of Austin, Texas, is that OK? How do I know? More importantly why is it suddenly my problem when the computer was previously able to get this stuff right?
One of the most obvious attacks, if you suppose that any fool can get a DV certificate for realbank.example but that's fine because only the Real Bank can get an EV certificate for "Real Bank" and that will protect you, is this:
Mallory gets an account with Real Bank and watches the protocol flow. They don't care about most actions but are very interested in login timing. Mallory obtains one of these certs for realbank.example, but with an organisation name they control, like "Mallory Inc."
Now Mallory MITMs a valuable customer of Real Bank. During login they passively pass back and forth every step until the POST where the customer's password and OTP code are supplied. For that POST Mallory interposes supplying that Mallory Inc. certificate. The browser has no idea what "Real Bank" is but it can see this is a realbank.example certificate, so that's fine, the password and OTP code are delivered to Mallory.
Probably this works seamlessly, and Mallory steals the customer's money with no evidence of how it happened.
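The gap Mallory exploits can be sketched in a few lines. The structure below is illustrative, not any real TLS library's API; the point is which certificate field the automation consults and which it ignores.

```python
# Sketch of the gap in the EV story: the TLS stack verifies the
# certificate's dnsName, never its EV organisation field.

def tls_accepts(cert: dict, requested_host: str) -> bool:
    # The machine-checkable part: chain of trust (elided here) plus a
    # name match against the host the browser asked for.
    return requested_host in cert["san_dns_names"]
    # Note what is NOT consulted: cert["organization"]. That field is
    # left for a human to inspect, after the handshake already happened.

real_cert    = {"san_dns_names": ["realbank.example"], "organization": "Real Bank"}
mallory_cert = {"san_dns_names": ["realbank.example"], "organization": "Mallory Inc."}

# Both certificates are equally acceptable to the browser for this host,
# so Mallory's interposed POST goes through without any warning:
print(tls_accepts(real_cert, "realbank.example"))     # True
print(tls_accepts(mallory_cert, "realbank.example"))  # True
```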
BUT if the customer was really trying hard to obey the crazy "check EV because that's secure" advice, they will see this - but only when their page renders, which is after their password and OTP code were delivered to Mallory.
They get to excitedly tell their bank that they've detected a successful attack - after it worked. If they're lucky the bank might even give them the money back, but probably not because it looks exactly like they're committing fraud.
"Relying on humans" is the only way real security is ever going to work, because humans are the people using computers: the people who mistype a site in their browser or click a malicious Google ad at the top of the page that "looks right". Anything that doesn't depend on humans is just as usable by malicious actors as by real ones. Let's Encrypt may be just as happy to issue a microsfht.com cert as a microsoft.com one. An EV cert is going to be a lot more challenging to achieve, and even if you get an EV cert for microsfht, it costs a lot more effort, which you have to start over when your site gets banned and you need to spin up micoshft.com and a cert for it.
Security based solely on automation will continue to fail and lead to exploit after exploit because it ignores the human factor, despite that being the primary place security breaks down. It's refreshing to see proof of identity requirements finally coming to ads, and hopefully it will lead to a change in understanding, that PKI is also useless without EV.
The honest truth that people seem to fail to understand is that security doesn't scale. The more you scale, the worse your security is, and that will always continue to be true. The more manual, the more humans required in a process, the safer it will be.
You've taken exactly the wrong lesson from this. It actually reminds me of the Southall rail crash. What we actually did after Southall was mandate the Automatic Warning System (AWS) for passenger trains. Faulty AWS? The train can't enter passenger service. Driver isn't paying attention? AWS brakes the train to a full stop. But what the unions wanted instead was to add more drivers. Sure, the unionised driver was inattentive, but if we have two, or three drivers in each train that'll be mitigated. Your reasoning is like theirs: "Let's do the thing that failed even more until it works".
Security based on automation works really well. How well? Google drove phishing of its employees to zero. Not just technical employees like my friends, but random salespeople and other non-tech roles, because they were mandated to use Google's security that relies on automation rather than vague human judgement. They don't need to know why it's safer, and they don't need to pay attention in a class; the automation doesn't care why they aren't supposed to give their Google credentials to "Oogle" or "Goggle" or "Gøøgle", it's just designed not to work when things don't match.
I'm not a Google employee, I'm just a user, let's walk through what happens to see how automation saves us every single time, resolutely and without fail.
I visit google.com, which is really Google, and I sign in. I am prompted to press the button on my Security Key (a physical object). Since I'm at google.com, the Key presents credentials for google.com, proving to google.com that I still hold that key.
Later I am fooled (maybe by a malicious ad) into visiting a site that is not google.com but I think it is, my adversary is very sophisticated and resourceful. The site looks 100% the same as the real one, but of course this is not google.com. It might be anything else except google.com, but for the sake of clarity let's say it's crooks.example
I try to sign in. The crooks have two options:
1. They claim to be google.com, which they aren't. The automation rejects this and they get an error; if they like they can present me with the error, but neither of us can do anything with it except say "Huh, that's an error".
2. They admit they are crooks.example, which is true. The Key happily gives them credentials for crooks.example, because that's who they are. But these credentials are useless for attacking my Google account, why did they bother getting them?
Notice there's no human judgement involved. This system is equally happy to present credentials to nazi-scumbags.example or cat-videos.example. But what it refuses to do is give the nazi-scumbags.example credentials to cat-videos.example or vice versa no matter how much the user is convinced it's fine. There's no "Are you sure?" dialog, there is no "Press OK to proceed" step, it just does not work.
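The walkthrough above can be sketched as a toy model. This mirrors the WebAuthn/security-key idea in miniature and is not the real protocol; the essential property it demonstrates is that credentials are scoped to the origin the browser reports, with no human decision point anywhere.

```python
# Toy model of origin-bound credentials, in the spirit of WebAuthn.
# Strings stand in for real keypairs; this is an illustration only.

credentials: dict[str, str] = {}  # per-origin credentials held by the key

def sign_in(origin: str) -> str:
    # The key mints (or reuses) a credential scoped to the origin the
    # BROWSER reports. There is no "Are you sure?" prompt for the human.
    return credentials.setdefault(origin, f"keypair-for-{origin}")

google_cred = sign_in("https://google.com")  # initial registration

# Option 1: crooks.example claims to be google.com. The browser, not the
# user, determines the origin, so the claim never reaches the key; the
# key only ever sees https://crooks.example.

# Option 2: crooks.example admits who they are. They get a credential,
# but it is bound to their own origin and useless against Google:
crook_cred = sign_in("https://crooks.example")
print(crook_cred == google_cred)  # False
```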
An insistence that we should just add more humans, like at Southall, is simply motivated reasoning, and has no basis in the observed facts. Automation works. You should resort to human judgement when automation isn't an option, it should never be your first choice.
> Security based on automation works really well. How well? Google drove phishing of its employees to zero.
It's funny that Google solved the problem so well for itself, despite its utter inability to do it for others. The catch is that preventing phishing of Google employees is a single-domain problem: Google knows everything about Google.
But Google woefully fails to offer a solution that even starts to work for consumer Gmail or for the other companies it exports its services to. My Gmail account got a phishing email today, sent via Google Forms, about a transaction. Google didn't recognize it as spam, since it came straight from Google itself, but it was definitely a scam.
Another great example is Google Voice, the source of 9 out of every 10 spam calls I receive. I could write a single-line filter that would block all of the spam calls: block every call from my Google Voice number's own area code (which is different from my real area code). But Google doesn't give me the tools to do that; it uses its own automated system, fails spectacularly, and my spam calls continue. Automation has failed because one competent human wasn't allowed or empowered to act.
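The "single line filter" the commenter describes really is about that simple. A sketch, assuming NANP-style +1AAANNNNNNN numbers; the area code used here is made up for illustration.

```python
# Sketch of the commenter's proposed filter: block any caller whose area
# code matches the Google Voice number's own area code. The area code
# below is a hypothetical stand-in, not a real number.

MY_GVOICE_AREA_CODE = "747"  # area code of the (hypothetical) Voice number

def should_block(caller: str) -> bool:
    # "+17475550123" -> area code is the three digits after the +1 prefix
    return caller.removeprefix("+1")[:3] == MY_GVOICE_AREA_CODE

print(should_block("+17475550123"))  # True: same area code as the Voice number
print(should_block("+12125550123"))  # False: a different area code
```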
Automation can get things right 95% of the time, but it will never understand the other 5%. And the big problem is that Google refuses to adopt human judgment: it insists automation is good enough and rarely lets you reach a human at all, even in an appeals process. When Google's automation decides to cut you out of its system, when it fails to judge correctly, you're just gone, often with no recourse.