Hacker News

Harsh blocking/limiting/challenging is way too valuable to sites that are actually trying to make money online. It's not going away short of legislation banning it. Losing 1/10,000 legitimate customers to cut fraud attempts, spam, exploit attempts, and so on, by 90% or more, is just too good a trade-off.

I have bad news about the most-likely fix for it, longer term, so we can lay off the IP-based reputation stuff and the geo-blocking: it's tying some form of personal ID to your browsing activity, so that bears the reputation instead of the address.

Sorry. Said it was bad news.




An alternative that preserves some privacy also doesn't seem that hard to imagine... though it probably has its own can of worms*.

Basically, the core problem is digital identities (accounts, IPs, phone #s etc.) are cheap to create (even considering captchas and all) so fraud is easy. The solution could be just to make it "costly" to create new digital identities. For example, you could get a "verified but anonymous" identity issued by locking some assets (could be real world money, or maybe something intangible like community reputation) as collateral with a trusted party (or, for the crypto people, the blockchain). If you misbehave, you lose your reputation on that identity (and essentially your collateral) and have to start over. This lets anyone bootstrap a "minimal" level of trust at the beginning before they can use time to prove themselves trustworthy.
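The collateral model above can be sketched in a few lines. This is a purely illustrative toy, not a real API; the class name, the stake amounts, and the `slash` operation are all assumptions made up for the example.

```python
# Hypothetical sketch of the collateral-bootstrapped identity model:
# creating an identity locks collateral with a trusted party, and
# misbehaviour burns it, forcing the abuser to start over.

class IdentityRegistry:
    """Trusted party that holds collateral against anonymous identities."""

    def __init__(self, minimum_stake):
        self.minimum_stake = minimum_stake
        self.identities = {}  # identity_id -> staked amount

    def register(self, identity_id, stake):
        # Creating a new identity is "costly": it locks collateral.
        if stake < self.minimum_stake:
            raise ValueError("stake below minimum collateral")
        self.identities[identity_id] = stake

    def slash(self, identity_id):
        # Misbehaviour forfeits the collateral; the identity starts over.
        return self.identities.pop(identity_id, 0)

    def is_trusted(self, identity_id):
        return self.identities.get(identity_id, 0) >= self.minimum_stake


registry = IdentityRegistry(minimum_stake=50)
registry.register("anon-7f3a", stake=50)
assert registry.is_trusted("anon-7f3a")

registry.slash("anon-7f3a")  # caught spamming: collateral is gone
assert not registry.is_trusted("anon-7f3a")
```

The key property is that mass-producing identities now costs real collateral per identity, while a single honest user only ever locks one stake.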

Note: This model might remind some of things like staking in crypto. However the idea is really not anything new... Putting money on the line is really how most low-trust bootstrapping happens.

*: To name a few:(1) this can result in participation being gated by wealth, which can be unfair. (2) it makes accounts more valuable to hack so people need better security practices [re: twitter checkmark]. (3) one would need some authority to decide how accounts lose their collateral or maybe the collateral is just burned to create that initial credibility...


> Basically, the core problem is digital identities [...] are cheap to create [...] so fraud is easy. The solution could be just to make it "costly" to create new digital identities.

We already use this model in practice. It's why so many services require a phone number verification now - they are hard enough to get en-masse, especially if you block things like Google Voice. They even have a big advantage in that they are comparatively hard to hack, as the SIM card is effectively a weak form of physical security key.

I think the big problems this causes are discussed on HN quite often.


Your idea comes from a good place, but identity theft is already a thing in the real world. Digital identities would also be very stealable, which makes malware more harmful in the long term. Imagine if your Twitter gets hacked and your digital identity makes it so your Gmail gets blocked.

Similarly, the internet is already very difficult for people with limited means. This would make it even harder.


Easy solution.

Go down to your local post office.

They physically hand you an identity token on a physical $2 2FA device if you give some evidence you live nearby. You can put down a deposit, or hand over the device from an old ID, which is cleared and reused.

It's traceable to the post office but no further, nothing is recorded other than that the token is deployed and roughly when.

Local communities can be responsible for cleaning up local messes. No need for the scammers two cities over to affect your reputation. No need for a corrupt employee handing out tokens to affect the reputation of the token you got ten years ago.


So every country in the world should simultaneously roll out this $2 2FA token?

And the governments of the world are going to do this in an anonymous way?

Who is going to manufacture these 8 billion (Or at least 3 billion if we only count Facebook's MAU) tokens?

And there still needs to be a global database of valid identifiers, else anyone could just create a software token that they can reprogram every second.

And we expect all people to carry these 2FA tokens perfectly?

And what happens when someone loses this token? The post office has no way to prove you owned that token in your proposal.

Same thing for revoking a token. There is no identity outside of the token, so how do you revoke it after it is lost? People are not willing to store a piece of paper in a safe deposit box.

This "easy" solution is impossible in practice.


You're projecting use cases that weren't proposed.

The only purpose is to provide evidence of not being a bot. Not to log in or verify identity. You don't need a server or proof that a particular token is owned by a particular person, just a cert chain and a list of postcodes with current public keys. The post office has a private key. They sign a message saying 'the holder of this token walked into the store'. Let servers make whatever judgements they wish about the chain's credibility. If a particular key signs lots of bots then you know where to look for the source of the bot farm and the people that live there know where to look to fix their reputation.
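The flow described above can be sketched with stdlib-only code. Note this toy uses HMAC (a shared secret) purely to keep the example self-contained; the actual proposal implies asymmetric signatures and a cert chain, so every name and message format here is an assumption, not a specification.

```python
# Toy illustration of the post-office attestation scheme: a postcode's
# office signs "the holder of this token walked into the store", and a
# server verifies the attestation against its list of postcode keys.
# A real deployment would use asymmetric signatures (e.g. Ed25519) so
# verifiers never hold signing keys; HMAC here just shows the data flow.
import hashlib
import hmac
import json

# Published directory: postcode -> key (would be public keys in reality).
POSTCODE_KEYS = {"90210": b"post-office-signing-secret"}

def sign_attestation(postcode, token_id, secret):
    msg = json.dumps({"postcode": postcode, "token": token_id}).encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return msg, sig

def verify_attestation(msg, sig):
    postcode = json.loads(msg)["postcode"]
    key = POSTCODE_KEYS.get(postcode)
    if key is None:
        return False  # unknown postcode: no basis for trust
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

msg, sig = sign_attestation("90210", "token-42", POSTCODE_KEYS["90210"])
assert verify_attestation(msg, sig)
assert not verify_attestation(msg, "forged-signature")
```

The server never learns who holds the token, only which postcode vouched for it, which is exactly the "traceable to the post office but no further" property.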

It doesn't need to roll out simultaneously. Just be an alternative to captcha that isn't as abusive as device attestation.

The manufacturers will be the same ones that manufacture the hundreds of billions of usb drives and phones and smart light bulbs.

The only problems are it's not as useful for abusing users or spying on citizens as revoking access to general purpose computing, and idiots who project problems onto it that come from use cases that are not proposed or say 'big number make thing impossible'.


It's not really my idea (it has been proposed multiple times over the years, to the point that it has even been implemented in many places).

As for identity theft, it's actually not that common/easy except in the US (which has no centralized national ID issuer and largely depends on hacks building on the SSN).

And besides, protecting digital identity is already important even without this bootstrapping.

> Imagine if your Twitter gets hacked and your digital identity makes it so your Gmail gets blocked.

Flip the services around and you have the reality of today.


> Basically, the core problem is digital identities (accounts, IPs, phone #s etc.) are cheap to create (even considering captchas and all) so fraud is easy. The solution could be just to make it "costly" to create new digital identities. For example, you could get a "verified but anonymous" identity issued by locking some assets (could be real world money, or maybe something intangible like community reputation) as collateral with a trusted party (or, for the crypto people, the blockchain). If you misbehave, you lose your reputation on that identity (and essentially your collateral) and have to start over. This lets anyone bootstrap a "minimal" level of trust at the beginning before they can use time to prove themselves trustworthy.

I've always thought that client certs would be an interesting solution to this problem. Any given certificate can carry signatures from multiple signing authorities, right? So we could imagine a world where there are many different certificate authorities, each of whom have their own criteria for signing a particular certificate and each of whom offer different varieties of assurance regarding the signature-holder's identity.

From here, the question of "should I allow the user identified by this client cert to use my service" simply becomes a question of 1.) checking the validity of the signatures of the client cert and 2.) deciding if the CA's criteria for signing certs aligns with my desired userbase.

For example, a particular CA might insist that their users go through some real-world process to renew their certification every few years, but when they sign a cert it means that the bearer has been strongly vetted as a real person.

An interesting side effect of this auth model is that a service provider accepting certs from a particular CA has someone to complain to if a user bearing their signature acts improperly on their platform. You could imagine a CA which has a code of conduct expected of the users whose certs they sign, and would perhaps revoke a user's certification if too many websites complain.


That's not safe for a lot of sites, though.

I hear that porn tends to be officially frowned on in a fair number of places.

Reading non-approved news is dangerous in some places.

Honestly debating political topics can be super dangerous if you're identifiable.

Sometimes even having a login on a site is dangerous; I think I heard about this after a non-mainstream discussion site got hacked a year and a half ago.


So, my thought process here was: given a fairly robust selection of certificate authorities, a given user could have a number of different client certs for use in different trust scenarios. Contrast the following:

- A user bearing a client cert with the name "Jonathan Grant", signed by a U.S. government agency which is known to verify that its signees' certs are a citizen of the United States.

- A user bearing a client cert with the name "Michael Black", signed by Alice, who is known to only sign certs after verifying that the real-world identity of the signee matches the name on the cert.

- A user bearing a client cert with the pseudonym "c00ln4m3", signed by Bob, who is known to only sign a single cert for any given real-world person. (To do so, he verifies the person's real-world identity but does not reveal which cert corresponds to which person.)

- A user bearing a client cert with the pseudonym "hunter217", signed by Charlie, who is known to sign certs without verifying the real-world identity of his signees at all, but who is also known to revoke his signature on certificates if a service provider complains about the user bearing that cert.

- A user bearing a client cert with the pseudonym "cypr3ss", signed by David, who is known to charge $1000/year for a cert bearing his signature but performs no other identity verification.

The point of listing out these different scenarios is that the underlying mechanism (client certs) is the same, but the end-user and the service provider don't actually have to trust each other: they only have to agree on a CA with mutually acceptable policies.
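The service-side decision in those scenarios reduces to a small policy check. A hypothetical sketch, where the CA names and policy labels come from the scenarios above and the function shape is entirely made up:

```python
# Sketch of the two-step decision described above: (1) is the cert's
# signature chain cryptographically valid, and (2) does at least one
# signing CA's known policy meet this service's bar?

TRUSTED_CA_POLICIES = {
    "Alice": "verified-real-name",       # verifies real-world identity
    "Bob": "one-cert-per-person",        # one pseudonymous cert per human
    "Charlie": "revokes-on-complaints",  # no vetting, but revokes abusers
    "David": "pay-to-play",              # $1000/year, no other vetting
}

def accept_client_cert(cert_signers, signatures_valid, acceptable_policies):
    if not signatures_valid:  # step 1: cryptographic validity
        return False
    # step 2: any signer whose policy we find acceptable?
    return any(TRUSTED_CA_POLICIES.get(ca) in acceptable_policies
               for ca in cert_signers)

# A forum that only needs spam resistance can accept Charlie's certs;
# a bank might insist on a verified real name.
assert accept_client_cert(["Charlie"], True, {"revokes-on-complaints"})
assert not accept_client_cert(["Charlie"], True, {"verified-real-name"})
assert not accept_client_cert(["Alice"], False, {"verified-real-name"})
```

Because a cert can carry multiple signatures, one credential can satisfy several services with different bars, without those services ever talking to each other.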


I think this is true. It also reminds me of one possible purpose of regulation and government, given the majority will usually be happy to throw any sort of minority under the bus for the "greater good."

This also reminds me of the anxiety of Google deciding to just ban my account for some reason. They can't be bothered to commit resources to making sure mistakes can be resolved. They don't care to lose a fleetingly small percentage of customers.

Not sure I have an answer. Just a thought.


> Harsh blocking/limiting/challenging is way too valuable to sites that are actually trying to make money online.

I'm not understanding the generalized sentiment here. How would, for example, a retailer benefit from this strategy? How does it protect their bottom line?

I can see how a particular kind of "facilitated user economy," such as games, gambling and promotional companies could benefit, but it doesn't seem that broadly applicable to what most people would consider a "mainstream" business.

> so we can lay off the IP-based reputation stuff and the geo-blocking: it's tying some form of personal ID to your browsing activity

And a new market for identity theft is born.

Also, as someone who serves content and geo blocks it, that's not up to me, that's up to the owner of the content or whoever happens to be licensing it for them. So, even if you sent me a picture of your government ID, it changes nothing.


> I'm not understanding the generalized sentiment here. How would, for example, a retailer benefit from this strategy? How does it protect their bottom line?

The amount of automated and apparently-manual attempted credit card fraud (and exploit attempts, for that matter) any halfway-prominent site with a CC form is subjected to is hard to appreciate if you've never seen it. It's a whole lot. They aren't even necessarily trying to buy what you have, but to validate that their stolen cards work. And they're quite busy. If too much of that gets through—really, any more than a very tiny amount of it gets through—you're gonna have an extremely bad time.

Various CC service providers like Stripe do provide tools to try to block those attempts, but defense in depth is usually a very good idea, including fairly aggressive firewall-level blocking.


> a retailer benefit from this strategy? How does it protect their bottom line?

A couple of examples I can think of are blocking bots from scraping their site for pricing and details, and stopping resellers from buying up all of the stock (see sneakers, electronics, etc). The last example doesn't directly impact their bottom line, but it will make customers go elsewhere.


That's not a solution, it would be way worse. Companies would then make automated decisions and associate them with your personal ID, and spammers/DDOSers would be spending serious effort to hack their way to using the IDs of innocents. So rather than just your home network or whatever getting a sh*t reputation with no recourse, you would.


> it's tying some form of personal ID to your browsing activity

That wouldn't just be bad news, it would be disastrous news. It would immediately render the entirety of the web worthless to me.


How does having a personal ID tied to browsing activity help with spam? Are spammers not real people with IDs?


Of course, but the theory is it's restricting 1 real person to 1 account, versus 1 spammer creating 1,000 accounts via automation.

And once your spammer has been identified then that's them banned/removed, unable to sign up again.


What's to stop them from using fake IDs


Airports seem to be able to spot fake passports pretty reliably.


Try forcing 100% of online traffic through an airport security checkpoint.


Presumably there can be more than one, like real life airport checkpoints? What are you even trying to say?


Spammers typically implement bots to carry out tasks. I mean, technically at some point a spammer is a real person, but when you're automating tasks and using bots, it's not at the same scale.


So what happens when your ID gets hacked and reused for fraudulent activity?

Would you have to submit a dispute with the internet credit agencies? Maybe join a class action suit against the entity that leaked your ID so that they're forced to give you a year of free internet identity monitoring?


The same thing that happens now when somebody steals your identity and ruins your credit history. You'll have to live in a bureaucratic hell for the next couple of years. And yes, as compensation, you'll get the $6.99 worth of services from the guilty party. If you win the class action suit, that is.


Exactly. Why on earth would we want to replicate such a terrible system online?

We should be reforming our current credit agency system, not empowering it with a new mandate of judging somebody's social or political creditworthiness.


Then you need to deal with levels of rate-limiting that are fine for individuals but make it not feasible for spammers.

Keeping with the cloudflare topic, if Cloudflare only permits you 10 requests per second (HTML + JS/images) that's still usable for web browsing, but someone running a cloud of hundreds of bots would be effectively shut down. Similarly with email, an individual probably doesn't need to send more than one email per 10 seconds but email spammers wouldn't find any ROI at that rate - business needs being different might necessitate a different registry or something in that case.
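The 10-requests-per-second idea above is the classic token-bucket shape. A minimal sketch, assuming per-identity buckets; the class name and API are illustrative, not any real Cloudflare mechanism:

```python
# Token bucket: each identity gets `capacity` tokens, refilled at
# `rate` per second. A browser's short burst fits in the bucket;
# a bot farm hammering continuously drains it and gets throttled.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=10, capacity=10)
# A page load's burst of 10 asset requests at t=0 all pass...
assert all(bucket.allow(0.0) for _ in range(10))
# ...the 11th request in the same instant is throttled...
assert not bucket.allow(0.0)
# ...and one second later the bucket has refilled.
assert bucket.allow(1.0)
```

At 10 req/s an individual browsing normally never notices, while a fleet of bots is capped at ~864k requests per identity per day, which changes the spam ROI math considerably.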


Nobody said it wouldn't suck. The only question is whether it sucks less than the alternatives.


If you have a better solution, I'm sure it would be very lucrative.


Looks like Cloudflare beat us to it.


They are already testing out digital IDs. Now link that to the social score... and make the browsers and the sites exchange this data in the background, and make frontend service providers refuse connections from non-supporting browsers as "bots"...


The other not-so-great approach is to act like a normal user. This stuff doesn't tend to happen to the average Joe who browses the WWW. It's when you're doing unusual (albeit harmless) things.



