- The optimal amount of fraud a business/industry should accept is non-zero
It's the simple observation that the cost of preventing each marginal fraud attempt increases; the last 0.1% of fraud costs far too much to prevent compared to the first 99%. Obviously society would be better off if fraud didn't exist, but since it does, the effort expended is only worth it up until the point where the marginal cost of prevention exceeds an acceptable threshold (when it starts to lose you money).
The optimal amount of fraud is still 0, but the optimal amount of fraud prevention lies somewhere on the margin.
This is why important transactions like banking have KYC checks, and buying a pair of sneakers doesn't.
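Here's a toy sketch of that marginal-cost argument (all numbers made up): fraud losses shrink with prevention spend, but each extra dollar of prevention removes less fraud than the last, so the spend that minimises total cost still leaves some fraud on the table.

```python
# Toy model, made-up numbers: fraud losses decay geometrically with prevention spend.
def fraud_losses(spend):
    return 1_000_000 * 0.5 ** (spend / 100_000)   # $1M of fraud at zero prevention

def total_cost(spend):
    return spend + fraud_losses(spend)             # what the business actually pays

# Pick the spend level with the lowest total cost from a coarse grid.
best = min(range(0, 2_000_001, 10_000), key=total_cost)
print(f"optimal spend ~ ${best:,}, residual fraud ~ ${fraud_losses(best):,.0f}")
# optimal spend ~ $280,000, residual fraud ~ $144,000 -- not zero
```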
I think you’re conflating the terms optimal and ideal. The ideal amount of fraud in society is zero. The optimal amount of fraud in society is not defined, because optimization problems are always subject to a set of constraints.
So then we may ask: “what is the optimal amount of fraud in society such that the costs of legislation, education, and enforcement do not exceed X% of GDP?” and that is a different question. You might also throw technology and R&D in there because new tools make it easier to investigate fraud. Of course new technologies also open up new possibilities for fraud, so this is a very complicated exercise. But I think it’s fair to say that given any reasonable constraints, the optimal amount of fraud is nonzero.
The way this is phrased, I expected to learn there was some benefit to a low amount of fraud, as such. There is not. There is a benefit to a high amount of trust, which necessitates accepting some amount of fraud.
The optimal amount of crime in a society is non-zero because a society with zero crime would be a dystopian police state where innocent people sometimes get caught up in the justice system's net to make sure it catches all of the criminals.
The classic principle of Anglosphere common law is that it's better to let 10 criminals get away with it than to convict 1 innocent person. The same idea applies to fraud, because overzealous fraud prevention causes problems for legitimate users whose actions incorrectly get detected as possible fraud. The benefit to tolerating a low amount of fraud is that your product won't be hostile to your legitimate users. The benefit to tolerating a low amount of crime is that you will live in a free society rather than a dystopian tyranny. Freedom is good and it is worth giving up quite a bit of safety for the sake of being free.
I said this somewhere else, but there’s 2 things at play here:
- A utopia where people don’t defect in prisoners dilemmas (most types of crimes like shoplifting: the store won’t have to hire loss prevention and cashiers, and you pay less for their reduced costs) is ideal, but:
- Such a utopia doesn’t and can’t exist because defection individually increases utility at the cost of everybody else. Hence cashiers, loss prevention, KYC, etc.
Thus the real world is a careful optimisation problem where we have to search for an equilibrium at which society as a whole benefits the most. People can argue all day about where this is, because the trade offs involved are non-obvious:
- More surveillance means, all else being equal, less crime, but police officers can defect too and only arrest minorities and use said surveillance for something else, etc.
The problem is walking through a very high-dimensional search space, and we humans are bad at it. There's no real solution though, because individual incentives don't line up to solve it.
I’d argue that the optimal amount of crime is zero but the optimal amount of possibility of crime should be non-zero. That’s a necessary escape hatch out of a police state or authoritarian government. After all, the resistance against the Nazis was technically criminal at that time, even though now we’d all agree it was a good thing it occurred anyway.
It is especially important nowadays because unlike back then where technology was limited and surveilling 100% of the population was impossible, it is very much possible today and is already being done in certain places such as China.
Does he mention this somewhere? Last time I spoke to him, he was working on Stripe Press; his interest in fraud and spam prevention long predates his work at Stripe.
If you define crime as violating the anarchist non-aggression principle, then it makes more sense. The only problem is that the state would be the largest offender.
Nazi laws weren't moral, as it's not moral today to demand half of my profits or I go to jail.
You just picked your own idea of morality and decided to elevate it above others: you chose the "anarchist non-aggression principle" as somehow morally superior to other ideas about how crimes should be defined, and decided that with that definition, targeting zero crimes makes more sense.
But the whole point is that we will never universally agree on a morality because society's overall preferences shift over time. So targeting zero crimes never makes sense.
> it's better to let 10 criminals get away with it than to convict 1 innocent person.
is arguably false. It forgets that the 10 criminals had 10 or more victims. If you optimize for the least number of victims, then it's easily possible that convicting a few innocent people has a net positive effect in lowering the total number of victims, including the victims of wrongful conviction.
To put it another way, perfect is the enemy of good. In this case, if in pursuit of the perfection of having zero wrongly convicted you end up causing more victims of criminals, then you've arguably failed.
I also believe that it’s better to let 10 criminals get away with it than it is to wrongly convict 1 innocent person. And I’m fairly sure that all the innocent people who were unfortunate enough to go through the court system would agree with me.
Also, not every crime must have a victim. There are a million victimless crimes.
Yes, it’s also debatable whether those should even be crimes (in my opinion - no), but the argument that 1 crime = at least 1 victim is flat out false.
Also, you too made the exact same error: you discounted the victims of the criminals. Yes, the 1 innocent wrongly convicted is bad, but what about all the innocents who are victims of the criminals? You absolutely have to add those innocents to your total of how many innocents you helped.
If you catch 10 serial killers and 1 happens to be innocent, you still saved the future victims of the other 9 (9, 18, 27 lives) in exchange for one innocent. If, out of overzealousness about catching zero innocents, you only catch 5 serial killers, you saved 1 extra innocent and forfeited 5 to 15 other lives.
You arguably believe what I'm saying. No law enforcement can be perfect, so it's guaranteed that innocent people will be mistakenly convicted. The only logical conclusion is that if you truly believe there must be zero innocents convicted, then you believe law enforcement should not exist, since there will never be perfect law enforcement.
it doesn't forget that. it implies that you shouldn't optimize for the least number of victims. it's cool to disagree with that and think about why or why not, but please actually engage with the idea rather than just assuming they didn't think it through at all.
> The optimal amount of crime in a society is non-zero because a society with zero crime would be a dystopian police state where innocent people sometimes get caught up in the justice system's net to make sure it catches all of the criminals.
At this point you're just playing with the definition of crime. I would argue that it is criminal to deprive an innocent person of their freedom, and challenge that your proposed scenario is actually "zero crime".
Secondly, you talk of catching "all of the criminals". In a "zero crime" environment there are no criminals - by definition if there is a criminal, then a crime has been committed at some point.
All that said I agree with your larger point - the cost of freedom is that people are not constrained before the fact from committing crime, and that's a good thing on the whole.
We don’t need there to be a benefit to a low amount of fraud to optimize for it. Optimization is a purely mathematical exercise [1]. Once we construct the problem with a chosen set of constraints then we apply mathematical techniques to solve it. Of course, many types of optimization problems (especially non-linear or non-convex) can be extremely difficult to solve optimally without relaxing some constraints or settling for approximations to the optimal solution.
But, besides that, the task of interpreting the results and of potentially selecting new constraints or even a new objective function is a separate matter. Perhaps we should be seeking to maximize trust rather than minimize fraud in society. But then we have to ask ourselves: “what would that look like?”
There does not need to be a set of constraints for optimisation to be defined. You can talk about optimisation on an unconstrained domain, for example all of ℝ⨯ℝ. But there DOES need to be a measure function that measures what you are optimising for. The benefit of fraud would be one such function you could optimise for, and that seems to be what GP is after. The pure amount of fraud is a different one, which seems to be what you are interested in.
Even without trust, you will reach an optimal amount, because preventing fraud tends to become more expensive than the fraud itself once you cover the simple and easy cases.
The benefit to a low level of fraud is that people are still looking out for fraud, so the society is more robust. If there were no fraud, and someone were to invent fraud (it will happen), the damage could be devastating.
They would have to be Moriarty-level in their inventiveness; most people start low and then expand their inventiveness with experience, and those low early frauds would establish the new invention in the society.
I meant optimal fraud at 0 as in a utopia would have no fraud whatsoever; a utopia where everybody cooperates in prisoner’s dilemmas, where I can lend a stranger my phone and not worry they’ll run off with it, and where cashiers don’t exist because you can count on people leaving money as they walk out the store.
Obviously this utopia doesn't and can't exist: people defect because it works against cooperators. Fraudsters are people who defect in the societal game of iterated prisoner's dilemma, and thus we have to build defences against them until some sort of Nash equilibrium is reached.
So I guess I did mean optimal in two different ways. One is a utopian cooperative paradise, the other an optimisation problem for an optimum where businesses make the most money and society overall is richer than if business activity got crippled.
> So then we may ask: “what is the optimal amount of fraud in society such that the costs of legislation, education, and enforcement do not exceed X% of GDP?” and that is a different question.
It's also not a question of any particular interest; you're interested in what maximizes (good - bad), not what maximizes (good / bad).
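One hedged way to write that down (notation mine, nothing here is from the article): let s be anti-fraud effort, B(s) the legitimate business it still lets through, L(s) the fraud losses that remain, and C(s) the cost of running it. Then the interesting question is

```latex
s^{*} \;=\; \arg\max_{s \ge 0} \; \bigl[\, B(s) - L(s) - C(s) \,\bigr]
```

and at that s* the residual fraud L(s*) is typically still positive, because the last increments of C (and the friction that drags down B) cost more than the fraud they would remove. Maximising a ratio like B/L instead would reward grinding L toward zero no matter what it did to B and C.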
>reducing friction helps drive more legitimate business.
A very real example in retail. I can minimize the possibility that I'll be hit with fraudulent returns. Require a receipt, short window, store credit only, must be in like new condition with all packaging, etc. (Or just sell everything on an all sales are final basis.) Different stores do many of these things to a greater or lesser degree on at least some merchandise. But you'd probably better be offering really good prices if you do.
> I guess this can all fit within a “marginal cost” explanation though.
Yes, but it undermines the first point a bit. There are costs, direct and social, to making transactions hard; so perhaps the optimal amount for a society is still not 0.
Also, there's nothing to say that the amount of fraud is stable and that we can't find a world where we have better mechanisms to reduce it for the same cost. (Improved technology, legal structures, norms, etc).
Some of those aren't analogous. Your Covid example: there's also the cost _to others_ of you catching and spreading it, even if the risk to you is lower.
Speeding is another example: the cost (or risk) might be acceptable to you but not to the person you have an increased likelihood of hitting and doing serious injury to.
At a societal level, it holds, which is why we invest in measures to increase the cost of doing the wrong thing (speeding tickets, removing licenses).
An analogy that may resonate with readers here is that targeting zero fraud is like targeting 100% uptime in a computer system. You evaluate the business trade-offs and decide how many 9s of non-fraud are appropriate, knowing that (1) each additional 9 is more expensive than the last but only gives you 1/10 of the benefit, and therefore (2) infinity 9s (equivalent to zero fraud/100% uptime) is a useless aspiration for all practical purposes.
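For the uptime side of the analogy, a quick back-of-the-envelope (standard availability arithmetic, nothing fraud-specific) shows how little each extra nine actually buys you:

```python
# Downtime allowed per year at each count of nines: every extra nine
# removes only a tenth as much downtime as the previous one did.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(1, 7):
    availability = 1 - 10 ** -nines              # 0.9, 0.99, 0.999, ...
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines: about {downtime:,.1f} minutes of downtime per year")
```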
That's incomplete, though. The business running the computer system would bear all the costs in attempting to target 100% uptime.
Targeting zero payments fraud does mean the business has to bear the costs of the fraud prevention measures, but their customers also have to bear intangible costs, like the annoyance of a detailed, invasive know-your-customer process before being able to buy anything.
But if I'm a user of this computer system that targets 100% uptime, I don't have to see any of the downsides/costs that the business incurs to try to get that uptime. I just see great uptime, and it's all rosy for me.
I think it's important to acknowledge that, in pursuing lower (or zero) fraud, both the business and its customers have to bear costs related to that goal.
> The optimal amount of fraud a business/industry should accept is non-zero
Let's make that: "The optimal amount of fraud a business should accept under the current credit card online payment system is non-zero".
There is absolutely nothing intrinsic about online commerce that requires fraud. Online businesses routinely operate with a money-first, zero-consumer-trust paradigm. They ask for my payment credentials first, and only then deliver the products.
If we were to design the online payment system from scratch, we would use cryptography to completely remove the notion of credit card theft, and escrow to settle consumer complaints, with an option for paid arbitration when things go bad. I guess you can call some of those cases "fraud" and some customers are so unreasonable that they border on criminal, yes, you can't make that segment zero, but I don't think that's the kind of fraud they are referring to.
The reason we can't have those nice things is the immense momentum of the current system, designed in the 60s by companies that have very little reason to change anything. In fact, an online payment reform would most likely strip them of their oligopoly. So yes, the optimal fraud level is non-zero because Mastercard, Visa etc. can push that fraud onto consumers (via retailers), and they are making much more money anyway from the current situation.
If you had zero fraud in society then nobody would build in any defenses against fraud at all.
You'd have a society of completely naive and trusting souls, which sounds blissful until someone wakes up one day and realizes that they can commit as much fraud as they like since society has no defenses against it.
It is like saying that the optimal amount of disease is zero, but if you have never had your immune system challenged by any kind of disease, then the first virus you come across will probably kill you.
Your suffering from childhood colds and getting burned by something like the car-out-of-gas scam help build defenses.
I like to think of it prisoner's-dilemma style: the optimal amount of fraud is zero, the same way the optimal outcome for both prisoners is to both cooperate. But the equilibrium at 2-cooperate is not stable; somebody will gain more utility for themselves by defecting, the same way that with 0 KYC on any and all transactions, fraud is laughably easy.
Thus the global optimum is one where there is no fraud and everybody cooperates, but it's not a stable optimum, and we slide to the real world where there are tradeoffs for preventing fraud, and a point is reached where rational actors deem the tradeoff unacceptable and thus accept some level of fraud.
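A minimal sketch of that instability, using the textbook payoff numbers (3/0/5/1 are illustrative, not from the article):

```python
# Prisoner's dilemma: payoff[(my_move, their_move)] = my utility.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    return max(["cooperate", "defect"], key=lambda me: payoff[(me, their_move)])

# Mutual cooperation maximises the total payoff, but against a cooperator the
# individually best move is still to defect, so "everyone cooperates" isn't stable.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```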
Not a perfect analogy with disease. And maybe it should suggest to you that, in fact, the optimal amount of fraud is zero.
With disease, the optimal amount is likely still zero. The immune system we have is not great; its only selection criterion is to keep people alive long enough to have children and to keep those children alive long enough to do the same. We're beginning to understand, as with HPV, that anything from a life-threatening case to an asymptomatic one can cause lifelong changes in the immune system and either cause or increase lifelong risk of non-infectious diseases.
And that's setting aside that we can, for example, just eradicate certain diseases if we set ourselves to it. Polio, smallpox, and hopefully malaria.
If we could eliminate certain kinds of fraud - through education, through making it impractical - that seems good, yeah?
But I think you just showed that it's really pretty similar.
And my gut reaction to the title of the article is that we really need less fraud and that we're very, very far from optimal right now. Although I'm not very concerned at all about fraud against government programs to help the disenfranchised. I'm more concerned about the endless e-mail, phone scams, door to door scams and all the stuff that prey on the elderly and vulnerable.
Similarly with the immune system, it would be interesting to consider wiping out Epstein-Barr and maybe eliminating Multiple Sclerosis, along with exterminating the mosquitoes that bite humans and cause disease.
But zero is likely not achievable or a stable optimum, and we're probably not going to cure the common cold or wipe out influenza and we may not want to (at least not without quite a bit of science fiction, global access to medicine and nearly 100% acceptance of vaccination in the population).
I think we're in agreement - the analogy to disease was flawed because eradicating some (perhaps many, all?) kind of disease would have widespread, uniformly positive effects.
The person I replied to seemed to think that exposure to disease helps in youth? Certainly seems like a widespread idea, but I don't know how true it is.
This explains things significantly better than the article, which seems to be little more than dragging out a surprising-sounding headline from a pretty obvious concept.
The reason I went to the trouble of writing it was that many, many people in both business and the finance industry do not agree it is obvious and a good portion do not agree it is true, and they take actions consistent with those beliefs, which harm themselves and others.
To be more specific, the article mimics the topic of a counter-intuitive "surprising" truth (like, for example the goat problem; or flaws in human cognition), while letting the reader down by making an obvious, easy to understand truth unnecessarily complicated.
That you think something is obvious is a fact about you, not about the world. People are not downvoting you because your English has any problems. They are downvoting you because they think you're wrong and arrogant.
This optimal (#) can and probably will change soon. We all carry around phones capable of trivial non-repudiable verification, and centralised digital cash (not Bitcoin but BankOfEnglandCoin) is technically feasible. So it's quite technically feasible for every day-to-day transaction to be completed with the sort of KYC verification currently reserved for, say, house purchases.
It's just the political / societal implications. These are beyond "hey, it's expensive for banks to cut down on fraud".
I disagree with "banks should allow certain levels of bank fraud because X" for the simple reason that we don't have "banks should provide interest-free funding to murderers, sex traffickers, pornographers and drug rings", even though that is often the same thing. (And in a two-page HN thread I am sure I am not the first to say that.)
(#) someone else mentioned the difference between ideal and optimal which is a very good distinction.
I doubt it. The current system is a local optimum. Better local optima already exist elsewhere.
In The Netherlands, direct online payments using debit cards are very common. These are secure payments, verified through a bank’s mobile banking app or internet banking with 2FA.
This means there is no risk for the seller that a payment gets reversed. There is fraud, but it centers mostly on social engineering people to authorise payments for others, or to mail their debit card to “the bank” for “recycling”.
Cost per payment: about 30 cents.
Meanwhile, in other countries, credit cards are the common online payment option. Security? A number on the front of the card and a “secret” second number on the back of the card.
Cost: 1.5-3.5% of payment.
Better security is possible, but it’s hard to move from a local optimum when you’re locked into a certain ecosystem.
The credit card no-security scheme works because everyone gets reimbursed for fraud. It comes at the cost of retailers handing a few percent of every transaction to intermediaries, instead of just a few pennies.
At some point (maybe already) we will perform 50% of GDP online. That makes the Visa network essentially a separate private tax-collecting entity. I get the "local optimum" - it's hard to break. But if anything motivates governments, it's competition.
I would not call anything in the fragmented, legacy US financial system “trivial.”
It took us a decade and counting to get chipped cards, longer to get contactless pay, and even then we don’t really use the PIN part of chip+pin. Something like FedNow is only coming next year.
I mean every central bank could tomorrow just put up a non-permissive (#) blockchain and just make a virtual coin for every cent out there. And this would cause utter chaos. It would essentially end fractional reserve banking. That makes loans ... difficult.
The impacts are enormous, but a digital native currency is so simple, so attractive we may well try it. And then have to rethink our financial regulations. It will look a lot like ICOs.
I still think it is inevitable.
(#) OK, the terminology I find either dubious or I misunderstand, but basically: every wallet holder gets their private / public key registered, then there is a known state of money globally, and the Bank is a verifying party to each transaction. Something like that anyway. There are many options, but essentially if we all "trust" the money printer then the technical problems simplify.
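Something like this, very roughly (a toy sketch using ed25519 signatures from the Python `cryptography` package; the message format and names are invented, and a real design would need far more than this):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Wallet holders register a public key with the bank; the bank holds the balances.
alice_key = Ed25519PrivateKey.generate()
bob_key = Ed25519PrivateKey.generate()
registry = {"alice": alice_key.public_key(), "bob": bob_key.public_key()}
balances = {"alice": 100, "bob": 0}

def bank_transfer(sender, receiver, amount, signature):
    message = f"{sender}->{receiver}:{amount}".encode()   # toy message format
    registry[sender].verify(signature, message)           # raises InvalidSignature if forged
    if balances[sender] < amount:
        raise ValueError("insufficient funds")
    balances[sender] -= amount
    balances[receiver] += amount

# Alice authorises a payment by signing it; the bank verifies and updates the ledger.
bank_transfer("alice", "bob", 25, alice_key.sign(b"alice->bob:25"))
print(balances)  # {'alice': 75, 'bob': 25}
```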
Yeah, but there is not a digital version issued by the BoE. It's not digital native, so it has all these layers on top.
I think a clean simple version is appealing - it's kind of the case for Bitcoin as a whole. Unfortunately the people attracted to that case did not realise that 75%+ of regulation and layers is trying to protect people from sharks.
Great explanation. But I'm not so sure about "The optimal amount of fraud in society is 0".
Especially if we broaden fraud to include other crimes. There are costs to preventing other badness in society as well. Firstly, there's the cost in taxes/allocating resources to prevention: do we really want to allocate a really large chunk of our shared human capital to policing marginal criminal activity? How many more police, judges, attorneys, lock makers, etc. would we need to stop the last bike theft?
Secondly, and arguably more importantly, there is the cost to freedom. A lot of the digital surveillance initiatives that are discussed and dismissed here on HN are enforced in the name of zero tolerance against (really bad) badness in society.
I think it's hard, or impossible, to create a somewhat large society with a zero crime rate. At least if we still want even just a sliver of the freedoms we are accustomed to in liberal democracies.
I think the point is that in a theoretical society in which there are no bad actors, and there is no cost to prevent fraud, the optimal amount of fraud is zero. That is, there isn't a reason you would want to encourage fraud, as if a little bit of fraud were good in itself. But when you also consider the cost of reducing fraud, the optimal state for the system as a whole will have a non-zero amount of fraud. And of course, bad actors do exist, so in a real system you want to accept some amount of fraud.
The difference is significant, because if you discover a way to significantly reduce fraud for a low cost (including the cost to freedoms and similar), it will be worth implementing. And there isn't some point where you say "we are already down to x% fraud, we don't want to go any lower than that, even if it doesn't cost us anything".
Hmm. I think my mental model is more that it should be "randomly" enforced. The probability of getting caught is higher than some certain threshold, but it's not necessarily bad if that threshold is lower than 100%.
I can't think of any reasonable society that has taken actions to show that it wants the probability to be 100%. I would even argue that the most harsh dictatorships probably have the highest enforcement, but that laws were/are very selectively enforced in favor of e.g. regime officials.
This, definitely. But also - at the social policy level, there are two additional issues:
- Outsiders: It's good to keep members of your society fraud-savvy enough that they can safely travel & do business outside your society...without being easy marks for fraudsters.
- Stability over time: If your society somehow gets fraud down to ~0, that'll lead to big cut-backs in anti-fraud efforts, "end of history" dreamers proclaiming that fraud has died, etc. Which is obviously a set-up for a sudden huge resurgence in fraud.
More generally, the cost to eliminate bad outcomes goes up exponentially once you've dealt with the easy bad outcomes. Credit card fraud is simply one example.
Or consider a simple non-financial example: I left half a dozen pears on the tree this year; getting those last few pears would have required hauling a 50-pound ladder around the house and then struggling with setting it up. (Due to its size it's a lot harder to handle than its weight indicates.)
There are also arguments that a certain amount of rule breaking is necessary in society to support innovation. A society with no rule breaking becomes static.
> This is why important transactions like banking have KYC checks, and buying a pair of sneakers don't.
Banks do KYC checks because it is required by law, not because it does anything to reduce fraud. Fake IDs are a thing. Requiring identification does not make transactions safer without a lot of other stuff happening too.
"optimal amount of fraud in society is 0" - are you sure? why?
Bad Things(tm) are useful for testing and improving safety/security, and when I see people/institutions with no experience reacting to Bad Things(tm), I know they're in for a world of hurt when it does happen.
Perhaps you mean, the optimal amount of fraud that isn't prosecuted... or not detected... ? Even then, I'd argue that there's a tiny percentage that's useful for keeping the safety/security industry on its toes and at the ready.
As a proof point, if you believe that war (world peace) is not a solved problem, then it's only a matter of time before your city/region/civilization/race faces an existential threat, for which the only true preparation is to be ready to innovate and mobilize.
Sorry if this comes across as dark. I mean it in the same vein as having a small percentage of farmers is desirable.
By contrast, I visited a traditional silk factory in Stockholm (amazing btw) and the craft has been lost to the point where they're struggling to find craftspeople able to work their looms and other old equipment. See Jonathan Blow's excellent talk about lost technology: https://www.youtube.com/watch?v=ZSRHeXYDLko
This whole article is one giant time sapping piece of click bait.
The author makes the unexpected claim that businesses want a non-zero amount of fraud. And so as a reader you are tempted to read on because you haven't heard this before. But essentially the argument is that fraud is an unavoidable byproduct of allowing trust/credit in the system to facilitate transactions. However, if businesses could have the trust without the fraud, of course they would take it. I wouldn't be so upset if the author had been more upfront about what this was about. I'm sure there are plenty of people out there who are learning about the fraud and trust/credit relationship for the first time. Just don't try and spin this into something it isn't.
There is an interesting thought experiment you can do. Imagine a world with 100% honest, rule-abiding people. What are the consequences of such a world?
The initial things you realize are, no keys, no locks, no gates, no passwords. ...but it gets even more profound the more you think about it. No police, no military, no cashiers, no ticket collectors, no bouncers, no bartenders (for beer/wine), no security guards, no prisons, no weapons manufacturing or sales, no security cameras or systems, no cybersecurity professionals or monitoring software, no criminal judicial system, no financial enforcement agencies... ...and how many industries would function far more cheaply such as insurance, unemployment, credit cards, and healthcare, due to no fraud?
It's actually staggering how much of society is structured purely around a lack of trust. It's easy to imagine that security is responsible for a huge portion of all human GDP/budgets - maybe 50%?
...and what percentage of the population is really responsible for causing this? Is it 1%? 5%? Or maybe it's much more? Maybe most of us are not criminals only because of the enforcement?
If we could program in obedience in people - what leaps and bounds we could achieve!
But more realistically, there is an equilibrium that exists between dishonest behavior and efficiency. The more common dishonest people are, the more expensive the entire system becomes. ...and it's not at all linear. A change from 0.1% dishonest behavior to 1% dishonest behavior probably results in an outlandishly more complex security setup.
> It's actually staggering how much of society is structured purely around a lack of trust.
You've ignored one huge category: disagreements. We can all observe the rules, but we may not all come to the same conclusions as to how they bind our actions. Reasonable people can disagree without being "dishonest."
Further, you're presupposing a list of rules that does not change and does not need to. Which is far less profound than you make it out to be.
This reminds me of War of the Worlds, where the martians had no diseases and thus no immune system. When they came to earth they died from diseases.
A society like that, with no defenses, would be very vulnerable. That's why it's better to actually have some bad actors to keep "selective pressure" on societies so we evolve our defenses.
I agree wholeheartedly. Often wondered the same. But I'd always think a small amount of conflict is needed to keep our defenses evolving in case we ever come into contact with a society that would be more sophisticated than us in that regard. The same applies to computer viruses, pathogens, and even scams and haggling. I know at least some of these have been explored already in fiction (Pandora's Star, War of the Worlds, Bender's Big Score).
This is fanciful but ignores that a huge amount of this system is in place because we can’t even agree on what is correct.
> Is it 1%? 5%? Or maybe it's much more? Maybe most of us are not criminals only because of the enforcement?
Far more. The number of people who speed consistently and only slow down when they see a cop is the most visible evidence of this. Marijuana use was something like 20% of the population before any legalization passed.
I feel like this starts with an agreeable premise. Some fraud is egregious, costly, and/or easy to detect. These low-hanging or high-impact cases are most worth pursuing. At some point you reach diminishing returns, where the amount of time / effort / capital you're putting in to eliminating fraud outstrips the losses from the fraud itself.
I don't know that I agree with the ethical conclusion that the optimal amount of fraud is therefore non-zero. The leap from "anti-fraud efforts are expensive" to these sentences in the final paragraph was not, in my opinion, convincingly made here:
>We should, as a society, accept non-zero amounts of benefits fraud. We should accept non-zero amounts of cheating on taxes.
I don’t know if that statement is backed by the article, which I will admit to not having read, but in general I agree. Completely eradicating benefit fraud will necessarily increase the burden on legitimate claimants to prove that they are in fact legitimate. Doing that is going to place enough burden on some people who should otherwise be able to claim that it results in them not doing so, or failing to do so because they were unable to provide the required evidence.
I’d much rather see a few people who didn’t need benefits manage to claim them than see people who do need them be left without. The first option costs tax payers a bit more money. The second results in people’s lives being made significantly worse, and in some cases in deaths.
Also, not means testing universal benefits means everybody appreciates them as just something their society does, so that reduces stigma for the beneficiaries and increases pride in your society. "We ensure children in this country have nutritious food" not "Why are my taxes going to feed this 10 year old whose mother has a full time job".
I grew up in an area where many parents could afford (maybe if they budget carefully, maybe just anyway) to privately educate a child. But they mostly didn't, because the government funded schools were pretty good. In fact, as children it was actually a minor stigma to be privately educated, because if your parents are spending a lot of money on the fancy school, either they don't know how to spend their cash (so they're stupid) or you're really stupid and they sent you to that school in the hope of making up for it. It was seen as like easy mode. Smart kids don't go to private school, why would they waste the money?
> The first option costs tax payers a bit more money.
The first option costs taxpayers significantly more than a ‘bit’.
Just look at how much it cost when they basically turned off all the checks in order to get covid relief into the hands of people who really needed it. In Arizona, after a while, they made it so you had to sit on (virtual) hold for 8-10 hours to verify your identity with a human or they would cut you off. Which worked well enough to ensure only the people who really needed it went through all the hassle. It really sucked for those people but they stopped sending billions of dollars overseas to people who just googled someone’s address.
> At some point you reach diminishing returns, where the amount of time / effort / capital you're putting in to eliminating fraud outstrips the losses from the fraud itself.
That's not quite what I got from the article. I read it as the more friction you put in place to prevent fraud, the harder it is for legitimate transactions to happen. Therefore, it's not so much about the cost of the fraud, but the opportunity cost of legitimate transactions which don't happen in the zero-fraud environment.
I appreciate how you phrased this. It has me thinking about how it might be similar for privacy and security in terms of information or even physical security. Yes, one can be super secure and safe from harm if one puts tons of locks on everything, but it also keeps out people who we might want to let in.
Actually, now I'm thinking about it emotionally as well. Best way to prevent myself from getting hurt is to close off as much as I can. Also the best way to prevent myself from feeling joy and all the other things I want to feel.
Emotions are a little different though. The more you know how deep the lows can go, the more you appreciate even the smallest highs. There is a little utility in getting hurt.
Good point. I agree with the overall thesis; there are a lot of things that get increasingly expensive as you approach perfection. (Perfection is still a useful guidestar, but each step toward it has to be made with costs in mind.)
However, I'm not nearly as breezy about $20 billion annually in fraud. Maybe that's fine from the perspective of the merchants and credit card networks. But from the societal perspective, that's subsidizing bad actors. People and groups who will not stop at one kind of crime as they try to grow. People who will divert other people into being parasitic. That's not healthy for society or for the individuals who end up living lives of crime.
So I think the society-optimal level of fraud is way below the merchant-acceptable amount of fraud.
The problem is not merely that the anti-fraud efforts are costly but that the anti-fraud surveillance apparatus will itself be value destroying. (In the tax case, it’s “people in democracies don’t enjoy their government having total visibility into their activities and society, in its judgment, says this is more important than tax collection at some margins.”)
This is not an ethical conclusion. This is a pragmatic and utilitarian conclusion where 'optimal' means minimising the cost/benefit ratio.
Incidentally, this shows that the 'perfect' ethical stance is not necessarily the one that delivers the most benefits at the least cost, aka when ideals meet the real world...
Yes, we should not accept the existence of fraud. We should simply be able to recognize the situations where fighting fraud is more costly than letting it exist.
Not that it really matters in most places since we are quite far from that point anyways.
It feels like a very subtle is-ought distinction, where the author is discussing something that unavoidably is the case and therefore concludes that it ought to be the case and therefore ought to be accepted, if not even welcomed. The marketing example makes this pretty clear. Of course no one thinks the marketing director could spend zero on marketing. But…surely they would love to spend zero if they could still get what they wanted for zero money.
Not if letting 10 guilty escape breeds more and more bad actors. You can make the argument that a few innocents suffering is a net benefit for society in the same way. In pursuit of zero innocent suffering you will capture no bad actors.
To put it another way, you're forgetting the victims. The 10 fraudsters made 10 people suffer. Their suffering needs to be added to the equation.
The current system from that perspective works great. You have to get pretty unlucky to get convicted as an innocent, especially in this day and age, when a cop's testimony isn't worth anywhere near as much as it used to be.
And morally alone it’s far worse to wrongfully convict an innocent person than to let a guilty one go free.
Not an ethical conclusion but a pragmatic one. The ethical part is what you do after the fact:
1. Pass the cost on to people's self-regulation, using client-facing measures, e.g. making them prove their innocence if they are an outlier.
2. Catch a couple of cases and over-market your policing ability to disadvantage the most gullible.
3. Catch a couple of cases, even minor infractions and destroy them with disproportionate fines or jail sentences, economy of randomness or economy of those who have the best lawyers.
Fraud against government, as above but add:
4. Add arbitrary constraints; you don't really want the system to work, you just fake it for political reasons.
The ethical issues of accepting nonzero fraud are that striving for zero fraud creates program design changes that lock people out of benefits. If you design a health care system that aims for 0% fraud, some measurable number of people are going to be deprived of care because the registration and billing procedures are too onerous. With taxes, aiming for 0% noncompliance will prevent people from taking advantage of deductions and credits.
This isn't hypothetical; it's the issue underlying the "program design" controversies about means-testing in public policy.
Not to mention that enforcement has rapidly diminishing returns. Even if your only goal was to maximize tax revenue (minus cost of tax administration), and you didn't care at all about people being able to take advantage of deductions, the optimal amount of fraud is almost certainly non-zero.
(And of course, if you did want to maximize tax revenue, you'd focus enforcement on the big fish.)
I would think the strategy would be to encourage low impact fraud with lazy compliance and making a customer whole (Credit card chargebacks). And then hunt out and destroy high impact fraud.
With the intent to incentivize and train criminals to stay small and low impact.
If you're a retail platform and you have a few scammers making a few grand off 20-100 dollar scams, you can play whack-a-mole with them, and that keeps people doing that small fraud rather than leveling up and potentially doing crimes that could endanger the whole business with the exposure.
> I don't know that I agree with the ethical conclusion that the optimal amount of fraud is therefore non-zero. The leap from "anti-fraud efforts are expensive" to these sentences in the final paragraph was not, in my opinion, convincingly made here
It’s like saying that the optimal dirtiness after cleaning your house is non-zero (greater than zero) because cleaning it perfectly takes much more effort than it is worth!
That’s not counter-intuitive at all. It’s just an obvious fact stated in a silly way (for clicks or whatever else).
That doesn't mean that the optimal amount is nonzero. Taken in isolation, the optimal amount is clearly zero. The optimal amount doesn't change based on the cost, the optimal amount of effort to expend is a different answer.
It's not just stated in a silly way, it's stated in a way that's incorrect because they didn't mean what they said. "The optimal amount of fraud is nonzero" does not actually mean the exact same thing as "in an optimally-beneficial fraud prevention effort, the amount of fraud is non-zero".
>Taken in isolation, the optimal amount is clearly zero.
But the very point of the article is to not take zero-fraud in isolation and instead, explain how non-zero-fraud is an unavoidable tradeoff when balancing 2 simultaneous goals:
- (1) prevent fraud transactions as much as practically possible
- (2) make legitimate transactions as easy as possible
If one accepts the premise of pursuing those 2 goals at the same time, then by definition, we're no longer talking about "in isolation". You've now unavoidably entered non-zero fraud territory.
Perhaps it's the author's particular wordsmithing of what he's trying to convey that just rubs many readers the wrong way.
> Taken in isolation, the optimal amount is clearly zero.
The post makes it clear that the discussion is not about theory or taking anything in isolation - it's about fraud in the real world. In that context, the way it's stated is correct - if you have zero fraud in the real world, that means that you designed the tradeoffs wrong and that the cost of your fraud prevention (in terms of actual dollars as well as inconvenience to customers, etc.) is greater than the overall cost would be if you allowed a small amount of fraud to occur (looking at the total cost of that fraud as well as the cost of preventing additional fraud).
I suppose the problem is that whether or not the title of the post is true or not depends on the context in which it's taken, and the title itself doesn't have any context. Since the post does offer context, though, I think it's reasonable to take the title in that context.
It's like cleaning old painted metal with a scouring pad; you want to clean thoroughly enough to take off the grime, but if you scour too long or too hard you'll end up taking off the paint itself. You'll always either leave a little dirt behind, or take off some paint, never perfection. You could strip all the paint and repaint it, but that's so much more costly in terms of time and materials that it's a whole different task.
And the argument that more stringent anti-fraud protections increase the burden on legitimate claimants is absolutely spot on, and has parallels in all sorts of other legal, financial, and market situations c:
Targeting zero is an immature approach that is self-destructive in most cases.
If your incentive is to have zero fraud, the organization will find ways to not detect fraud or add so many controls and audits that the cost of doing whatever will go up.
There's a balance. In the tax world, the de-clawing of the IRS for certain things has dramatically impacted compliance. You want enough enforcement that you're discouraging the median cheater, but not so much that the cure is more expensive than the disease.
We can start with payment. What would someone pay with? Credit/debit numbers can be stolen. Checks can be stolen or forged. Cash can be counterfeit. What form of transaction has zero chance of fraud?
To make transactions available to people you need to introduce systems that can have fraud in them. There is a balance between availability/ease and fraud.
Bitcoin. It can easily be confirmed as valid (zero chance of counterfeit), and is otherwise a bearer instrument with no further settlement, and impossible to reverse (like cash).
The problem with accepting it is that people figure out repeatable tricks to get around the system.
If we view those repeated tricks as business as usual - we should probably make them accessible to everyone. Otherwise the small fraud becomes rampant.
This is an extremely long-winded article/blog to say the following
> the policy choices available to them impact the user experience of fraudsters and legitimate users alike. They want to choose policies which balance the tradeoff of lowering fraud against the ease for legitimate users to transact.
You encounter this well-known tension pattern in several places. For instance, in safety-critical systems there's a tension between safety and progress. Or take the IT-sec industry: a tension between usability and being secure.
I work in IT/AppSec, and this came to mind immediately. Implementing perfect security would be "don't connect to the internet and don't let anyone use the computer". Clearly not an option, so my job is to analyze the cost and risks against the benefits and help choose a path of balance. A specific example: we can only heuristically detect the difference between legitimate and malicious calls to the public endpoints. Is that spike in traffic trying to DDOS us, or is it close to Black Friday so customers are in go-go mode? Setting the rate limits somewhere meaningful is a tradeoff.
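As a toy illustration of that tradeoff, here's roughly the kind of knob I mean (a token-bucket limiter; the capacity and refill numbers are made up and are exactly the things you'd have to argue about):

```python
import time

class TokenBucket:
    """Per-client rate limiter: the capacity/refill knobs trade abuse blocking
    against friction for legitimate bursts (e.g. Black Friday traffic)."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # throttled: might be a DDoS, might be a keen customer

bucket = TokenBucket(capacity=20, refill_per_sec=5)
print(sum(bucket.allow() for _ in range(100)), "of 100 burst requests allowed")
```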
There was a study done on a tribe of wild monkeys in which mutual grooming to remove ticks/fleas/lice happened. Some monkeys 'cheated' and didn't pay forward the grooming they received. The study concluded that as long as cheaters were less than 5% of the population, mutual grooming continued. When the number of cheats exceeded 5%, the system broke down and no mutual grooming happened for some time.
It seems that a society can bear a certain amount of cheating before the system breaks down, a 'tipping point' of sorts. As long as we keep the cheating below the tipping point, the game continues, which is after all the most important aspect, I think.
That's a lot of words to say "to make fraud harder you have to make buying from you harder, the optimal amount of fraud is the amount of fraud you get when any additional measure you could take against fraud would lower your revenue more from lost business than it would lower your costs from people committing less fraud"
Something related that I've noticed in government projects is that they will spend $100K on a tender process to eliminate a fraud risk of 5% that amounts to at most $10K if it does occur. So if you amortise the total "value" of the fraud, it's 10,000 x 0.05 = $500!
Spending $100K to avoid a loss of $500 is something most sane businesses will not do, but to government this makes perfect sense, because they have a rule that the acceptable amount of fraud is zero.
Hence, they'll spend nearly infinite resources to try to bring fraud down to closer and closer to zero.[1]
You see similar things with risk aversion. Some risk is inevitable, but again, government departments will cheerfully blow billions of dollars to avoid the slightest risk. Projects like ITER and the SLS are highly risk averse and their costs reflect that. Meanwhile smaller, newer, more risky projects will run circles around them.
[1] At least what is perceived to be zero. In actuality fraud remains rampant, but as long as it is technically legal, it is not subject to this rule.
> Spending $100K to avoid a loss of $500 is something most sane businesses will not do, but to government this makes perfect sense, because they have a rule that the acceptable amount of fraud is zero.
In short: no. That's the perception but is not correct, at least security risks.
So since you mentioned SLS (you mean CMS and healthcare.gov maybe? Hello from a friend of people who made those things), I assume you mean US government. Now, I totally agree that this is the perception. Few parts of risk management are mandated, at least on the infosec side of the fence, beyond what is in law (FISMA, and thus the Risk Management Framework made to address it as a requirement). The NIST RMF (SP 800-37 and SP 800-53) is very flexible and, without even mentioning quantitative methods, those documents would inherently be at odds with your example; it is the opposite of risk management. But I do agree USG staff and contractors perpetuate this fallacy when they are handed the checklists of high-level recommendations and don't bother reading 800-37 at all, which lays out the rationale, strategy, and approach that show why the example you give is bad, and for good reason. They essentially document that not all systems get the same breadth and depth of security across government, in all agencies and projects equally, for this reason. It doesn't scale or make sense.
Sorry for the rant. I have it once a week with friends in the public and private sector, and the perception is real and this may happen, but the docs and the people who wrote them (also friends) can tell you that it is very much the opposite of what's recommended by NIST, and those upstream guidelines are derived from law.
I'm sure that's what ought to happen, but you're not considering the bureaucrat's point of view here. If fraud happens on their watch, and there was a process that could have prevented it, it's their ass on the line. And since it's not their money that they're spending, it's perfectly sensible to spend $100k (most of which will be hidden as time, not capital costs) to not just avoid the potential loss of $500, but to avoid the far more damaging possibility of being accused of negligence.
Let me be more blunt: perhaps I am one of those bureaucrats. Your comment espouses a view of government bureaucrats as indifferent to wasting "others'" money and only accepting poorly executed projects. Most agencies are currently beholden to extensions under continuing resolutions for day-to-day spending money and don't have stable budgets due to Congressional pressure (the new year is months away, but it has been this way frequently for years now). So you have to fight to be allowed to spend any little bit of money that wasn't allocated as part of ongoing spend (though even I don't know how that works in detail, I am low level), even for small amounts, to keep everything under $10,000. I have to ask and wait for ridiculously small stuff.
The prior comment (and my rebuttal) were about whether it is intentional re measurement, re the risk of doing and not doing things and what the cost is (a.k.a. risk-managed programs in the parlance NIST helped standardize). No, it is not. As for what you are leaning into: perhaps quantitative measurement of risk is important, but perhaps raw costs are not the only factor? I agree that in some cases (I am not sure it is over half, so I can't say most, but I am not suggesting it is a really small fraction and completely disagreeing with you), or many cases, raw cost is a factor that seems to be ignored. How is that? But there are others where we (as civil servants) probably could try to explain implied costs (I have, ironically, failed to get people to consider a mathy approach to this a few times) around things you and I probably see as qualitative. But sometimes we in govt have to do things because elected officials make us do so through changed or new laws. Other times the perception of the very ineptitude and indifference to hard-to-quantify factors you cite is a risk unto itself (just harder to quantify) for the larger agenda, and that drives the need to pursue it anyway.
I recommend people read this article and cite it often when presenting about how to help government help itself. I started doing that as an outsider, and now it rings truer than ever to me on the inside.
When I worked Starbucks retail, we were subject to a "just say yes" policy. So when a couple came in and said they had forgotten some item, or never received it earlier in the day, I gave one to them without hesitation. It helped that I also recognized them as repeat customers. A co-worker said "you just got scammed" with disapproval. And I explained that I probably did, but we were required to do it even if we didn't want to. Otherwise we risked pissing off honest customers. Or maybe it just made more sense to spend the time serving the next 2 customers faster instead of being suspicious with 1 customer.
Later on, though, I remember pissing one off when he had to wait in line behind people buying drinks and he declared he would not be buying the $300 espresso machine he had come in to buy. I wonder if my actions resulted in a net gain or loss to the store...
> he declared he would not be buying the $300 espresso machine he had come in to buy
FWIW I strongly doubt that people who say things like that ever really intended to buy the thing. If you were really planning on buying a $300 espresso machine today, are you actually going to change your mind because you had to wait an extra 5 minutes?
When I worked retail, I would give customers whatever they asked for because 1) it's not my stuff, 2) it belonged to a soulless corporation that did not need it, 3) I am not paid enough to be a store's loss prevention agent.
But Starbucks had this explicit corporate policy anyway, which lines up with the article and its principles.
And it takes a while to become that realistically cynical about retail work. We were actually treated pretty well, had mostly friendly customers, and got along with management. At least at the time.
> Later on, though, I remember pissing one off when he had to wait in line behind people buying drinks and he declared he would not be buying the $300 espresso machine he had come in to buy. I wonder if my actions resulted in a net gain or loss to the store...
Sorry, I didn’t understand this part. Did he expect to cut in line because he wanted to buy the machine rather than a drink? I don’t get what you were supposed to have done differently. Or maybe I do, but the expectation doesn’t make sense to me, I have never seen anything like that done anywhere.
Our store had a couple registers on either end of an L-shaped counter. We didn't always open both. Our main register near the drinks was open and had a line. He approached me as I was doing some task near the other, closed register, which was also near where we stored the espresso machines for sale. So he didn't want to cut in line so much as to have me/us start a parallel flow for his purchase.
It's not a crazy idea; we appeared to have some spare capacity for it (although we really didn't). And he may have spoken to someone else about it earlier. It also wouldn't have been unreasonable to expect that once he got to the front of the line he would have been directed to the other register to wait anyway. He may have been trying to minimize the disruption he caused to the line. He may have also thought the line was too long and we should have already opened up the second register. We were very efficient with a single-register flow, but customers always tried to start up a second line before it was really necessary.
I'm not confident I did the right thing by him; just that there may be situations where losing a $300 sale may be the most profitable choice.
I’m fairly brand-loyal to Starbucks precisely because of their relaxed attitude towards customers. I remember a few times in grad school going there to work for a few hours, using their wifi, and leaving without buying a single item. I never intended to do so, I just got lost in my work. I don’t think the baristas even noticed.
It sounds more morally acceptable to say, "The optimum level of anti-fraud enforcement does not eliminate all fraud." It's not that there's a nonzero amount of fraud that is optimal — all fraud is bad — but rather that the return on efforts to eliminate the last bit of fraud is negative.
> overwhelmingly businesses simply absorb fraud costs in the same way that they absorb their office rent, staff salaries, and marketing expenses.
I didn't realize that is who usually pays for fraud. I see two problems with this arrangement:
1. The credit card companies, who in some ways are probably in a better position to prevent fraud, are less incentivised to prevent it, because they aren't the ones paying for it. For example, they could make card credentials harder to steal by ensuring the raw credentials never go directly to online businesses, either by using asymmetric cryptography rather than a static number or by using an OAuth-style flow with the card issuer's website to complete a transaction. But the card companies would bear the bulk of that cost, and it would primarily benefit retailers.
2. Consumers that pay using a method with less fraud risk, such as cash, still have to pay a higher price to cover the cost of absorbing the fraud cost.
On the other hand it does allow businesses to self select how much fraud they are willing to accept.
I personally can't stand PSD2[0]. It has completely ruined the online shopping experience in the EU (for me at least).
I loved the way American Express implemented it. They sent you a one-time passcode on your first purchase with the merchant, and then you could also choose for them to not bother you with any further purchases from the same merchant. I had this enabled by default, it made the experience a million times more enjoyable.
Unfortunately not everyone took AmEx, and I no longer live in the UK (or in a country where AmEx has a presence, for that matter), and the way banks in my current country of residence have implemented it is absolutely abysmal.
1. The billing address must be a match 100% of the time, which is painful in situations where you can't specify separate billing and shipping addresses and you want the item shipped to a different address (could be 3 for me)
2. Mandatory 2FA on every transaction. The details depend on the exact implementation, but typically you must wait for a notification on your phone and then type in a PIN. In some implementations you have to scan a QR code and then type in said PIN. Sometimes the solution they use for this is down.
3. If anything is wrong at all (billing address/mistyped CVV/whatever), the transaction just gets refused at the end of this loop. Was it something you did wrong? Is some system down? Let's try again.
And sometimes this even messes up recurring subscriptions. My Microsoft 365 Business sub that's billed monthly on a credit card GETS REJECTED EVERY TIME UNTIL I MANUALLY GO THROUGH THIS STUPID PROCESS.
It has made paying for things online a chore. I couldn't care one bit about all the fraud this prevents, because I was never liable for it in the first place. That decision was previously up to the merchants (who could have implemented all of this if they wanted to). Now it's forced on everyone.
Two-factor authentication is my least favorite thing about PSD2. Back in the day I would simply memorize my credit card data, and was free to buy anything online, anytime. It also gave me confidence during vacations abroad that if I get mugged on the street I will still have access to my money. Now I need to keep my phone close for SMS codes / mobile app authorization, and I need to keep a backup phone just in case my primary phone breaks/gets lost/is stolen.
tbf that's more an issue with incompetent software devs and, more importantly (lest someone accuses me of shifting the blame onto devs like a clown would), horrible business product owners. My hope is that Biden's executive order on SBOMs, and whatever the EU probably has in the works along the same lines, will (unfortunately only slowly) change how the way business treats software development shapes software development culture. (SBOMs may sound completely tangential to this, but in the long run they have a pretty important role to play here.)
This sort of thinking has been prevalent in the payments industry for a long time, and I find it infuriating.
The article is specifically limiting its discussion to situations where a payment credential is stolen. Those cases cost $10-20B per year.
This is HN, so most people here can figure out how to secure payment credentials, especially given the assumption that each credit card contains a tamper resistant computer with durable storage (as they currently do).
Instead of ending credential theft (at least in cases that don't involve violence/coercion), the payment networks pass the cost on to vendors, then advertise fraud protection as a feature to card holders.
This only works because the payment processors' monopoly prevents the merchants from fixing the underlying security issue.
So, the payment networks charge the merchants a large percentage of sales (imagine what your local government could implement if it increased sales taxes by 3-5%!) to supposedly pay for fraud protection.
This is exactly like a classic protection racket, except that the thugs that smash up the business don't actually work for the credit card companies.
(I do agree with the premise that driving crime to zero is usually not worth the cost, but that's just "Innocent until proven guilty", and not the subject of the article.)
Merchants are even more lax about card fraud than banks. The National Retail Federation complained about the cost of upgrading to chip readers. They asked the government to force banks to eliminate PCI DSS which would make it even easier to commit credit card fraud. PCI DSS is compliance not security but without it retailers would literally do nothing. Some retailers tried to get customers to switch to QR code payments linked directly to your bank account. One of these payment apps CurrentC was immediately breached.
Smart cards were also breached before the US switched to them.
I'd object to paying for PCI DSS if I were them, to be honest. The idea that every merchant (or credit card reader) even has access to credentials is ludicrous.
The CurrentC breach was of email lists, not the payment flow. It's embarrassing, but still a better track record than the existing payment processors (which probably suffered 10,000s of payment flow breaches as I typed this.)
It's actually pretty simple and intuitive if you put the reason up front, article seems needlessly long:
> the policy choices available to them impact the user experience of fraudsters and legitimate users alike. They want to choose policies which balance the tradeoff
What I don't get is how policy makers can appreciate such nuances and then not see how attempting to ban encryption could possibly break modern society... different policy makers I have to assume.
The literature on the evolution of cooperation, focused around computational thought experiments with iterated prisoner's dilemma, seems relevant here, e.g.,
If you allow a population of individuals repeatedly playing prisoner's dilemma against each other to evolve their own strategies, you end up with a large percentage of the population cooperating with each other by default, but punishing cheaters after they are observed cheating. But a small percentage of cheaters will always persist, because as the number of cheaters goes down, the number of naive cooperators will go up, thus making it more advantageous to cheat.
In evolutionary jargon, cheating behavior undergoes "negative frequency-dependent selection". And you end up with a low, but nonzero, equilibrium frequency of cheaters.
This outcome here depends on the order of rewards/costs: the best outcome comes from cheating on a cooperator; next best is cooperating with a cooperator; then cooperating with a cheater; and worst is two cheaters cheating on each other.
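Roughly, as a replicator-dynamics sketch (the payoff numbers below are just illustrative ones I picked to match that ordering, not anything from the literature):

    # Frequency dynamics sketch for the cheater/cooperator story above.
    # Illustrative payoffs following the stated ordering:
    # cheat-vs-cooperator (4) > mutual cooperation (3) > cooperate-vs-cheater (1) > mutual cheating (0).
    T, R, S, P = 4.0, 3.0, 1.0, 0.0

    def step(p_coop, rate=0.1):
        """One generation: the cooperator share grows or shrinks with its relative payoff."""
        w_coop = p_coop * R + (1 - p_coop) * S    # expected payoff of a cooperator
        w_cheat = p_coop * T + (1 - p_coop) * P   # expected payoff of a cheater
        w_mean = p_coop * w_coop + (1 - p_coop) * w_cheat
        return p_coop + rate * p_coop * (w_coop - w_mean)

    p = 0.99  # start with almost everyone cooperating
    for _ in range(2000):
        p = step(p)

    # Interior equilibrium where both strategies earn the same payoff:
    # p* = (S - P) / ((S - P) + (T - R)), i.e. 0.5 with these numbers.
    print(f"cooperators ~ {p:.2f}, cheaters ~ {1 - p:.2f}")

Starting from almost-all-cooperators, the population drifts to a mixed equilibrium; different payoffs move that equilibrium around, but they don't eliminate the cheaters.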
It's a caricature, but the evolutionary dynamics seem to map pretty well to the kind of examples people are bringing up here in the comments.
(The actual "prisoner's dilemma" is rather a confusing story to use, because it's about criminals trying to decide whether to cooperate with each other or betray each other to avoid jail time. So you end up talking about the evolution of cooperation among a population of criminals.)
Some banks used to take a thumbprint when you cashed a check in person. Very few do that now. When they did it, it was more symbolic than useful, because they didn't have a useful checking system. Today, if banks took fingerprints, they'd find out more than they wanted to know, because immediate lookup is possible. It's not their job to filter the entire population for warrants and illegal aliens.
In-person identification is getting really good. Here's HIKvision's new ID unit.[1] Face recognition, iris recognition, fingerprint recognition, and RFID card recognition in one convenient iPad-sized unit. Iris recognition now works at 70cm range, so it can be used routinely. In China, there is no right to be anonymous.
Worth noting: credit card companies absorbing losses varies by country. The US is pro-consumer on credit card fraud, but not on debit card fraud. This differs by country.
Is this something that could be argued about other sorts of crime as well? In particular, in the ongoing fight against encryption that has been widely commented on HN multiple times, can (or should) one (safely) argue that e.g. the optimal amount of online sex trafficking and child abuse is greater than zero? What would be the consequences of taking such a stance once it inevitably reaches public discourse?
You could argue that but as you expect your opponents would quickly paint you as being pro-X. Every decent person would prefer zero child abuse, but few people would support having mandatory police surveillance cameras installed in every room in their house, even if such a panopticon would be proven to reduce significantly child abuse. Us meatbags are irrational like that.
> can one argue that the optimal amount of online sex trafficking and child abuse is greater than zero?
No, this fraud argument does not apply to child abuse or sex trafficking. The reason is because the fraud argument is talking only about direct financial loss of fraud compared to direct financial loss of enforcement. The fraud argument doesn’t actually work if we’re talking about individuals losing their savings, it only makes sense if you assume the cost of fraud is borne by banks, and that it’s a marginal cost and does not bankrupt anyone.
There is no amount of money that makes the damage done by sex trafficking or child abuse okay, and there is no reasonable way to convert the damage done by these crimes into money. To suggest that the optimal amount is non-zero would only be an externalizing of the damage and costs of such crimes, and to essentially reduce our morals to money. And that’s exactly what this very argument does in other contexts; it externalizes non-financial damage, and sometimes financial damage too. This argument is made in other contexts, and it’s sometimes wrong and/or full of assumptions that aren’t true.
We could imagine extreme hypothetical situations that might clarify the argument or how to think about it - is it equivalent if 1% of people suffer a 100mm knife wound or 100% of people suffer a 1mm knife wound? The 1% would all die. In the other case, everyone suffers a mild inconvenience they forget about by tomorrow. Despite the equal amount of flesh damage, these are not remotely equivalent, and thus can’t be compared or declared as optimal. The type of damage done matters, and the number of people affected and amount of damage done to individuals matters.
Beware arguments that reduce negative outcomes to money. These tend to favor businesses (who are biased to prefer less regulation) and tend to externalize all the indirect costs and the costs to society. This is exactly what has been done with regard to pollution over the last century - it has been successfully argued that the optimal amount of pollution is non-zero, and we’re starting to see the consequences of that and pay costs for decisions made long ago. There was a pretty good paper I read [1] that re-evaluated these arguments for several specific large public works projects in the 50s through 70s, where the post-facto costs and outcome benefits calculations were shown to be different by orders of magnitude compared to when the decisions were being debated. IOW there is good historical precedent-based reason not to trust someone who claims the damage will be minimal or equivalent to the case where we put some effort into minimizing it.
Sure, there is a trade off, but they have it wrong for online fraud from stolen credit cards.
The three digit CVV code should be a one time passcode (OTP). Banks have been using these since the 1990s for online logins.
Using 90s technology, the card issuer would ship one of these OTP fobs along with the card. It would have the card number printed on it, a button, and an LCD screen where the OTP is displayed. The CVV already flows through to the computer that authorises the transaction; only the software that checks the CVV would need to change.
So the trade-off is that the user has to carry a separate, slightly thicker card (to fit the battery) for online use.
I just googled: you can get batteries that are 0.4mm x 22mm x 29mm, and a credit card is 0.76mm thick. E-ink is old technology now with the right performance characteristics. I suspect that, in volume, you could integrate the OTP device into the standard card form factor for less than a couple of dollars a card.
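For a sense of how little machinery this needs, here's a rough sketch of a counter-based one-time CVV in the style of HOTP (RFC 4226), truncated to three digits; the scheme, parameter names, and shared-secret handling here are my assumptions for illustration, not how any issuer actually does it:

    # Counter-based one-time CVV sketch (HOTP-style, RFC 4226), truncated to 3 digits.
    # The fob and the issuer share a per-card secret and a counter; the issuer would
    # accept a small window of counter values to tolerate presses that never got used.
    import hashlib
    import hmac
    import struct

    def one_time_cvv(secret, counter, digits=3):
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                   # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    secret = b"per-card-secret-provisioned-at-issuance"  # hypothetical
    print(one_time_cvv(secret, counter=1))  # what the fob's screen would show
    print(one_time_cvv(secret, counter=2))  # the next press shows a different code

Three digits is obviously only a 1-in-1000 guess, so the issuer would still need velocity limits behind it; the point is just that the verification side is a small software change, as you say.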
So with a bit of innovation the friction of payment / fraud tradeoff goes away.
This all strikes me as fairly obvious to someone designing these things, is there another tradeoff going on here?
Banks don't have much incentive to invest in IT security. They have insurance.
That's why IT sec all around banking is just the bare minimum required by regulations.
Those sec-specs are also usually at least a decade behind the state of the art… And they get updated only very seldom, since updates would cause "a lot of paper work" at the banks, so the banks are always against any changes to those regulations; and if something finally does change, it takes the banks at least another half decade to adapt, which they can get away with because the compliance windows are usually set to be very long, because, you know, it's really a lot of paper work…
If each card held a public/private keypair, you could sign a message authorising a payment of X amount at the current time, without leaking your secret (the credit card number) in every transaction.
Add two factor authentication, if you want, but fix the underlying giant issue first.
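As a rough sketch of what that could look like (the message format and field names here are made up for illustration; this uses Ed25519 from the cryptography package):

    # The card keeps a private key; the issuer keeps the matching public key from issuance.
    # Each authorisation binds amount, merchant, and a timestamp, so a replayed or
    # altered message fails verification, and no reusable secret ever leaves the card.
    import json
    import time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    card_key = Ed25519PrivateKey.generate()       # would live in the card's secure element
    issuer_public_key = card_key.public_key()     # registered with the issuer at issuance

    authorisation = json.dumps({
        "amount_cents": 4999,
        "currency": "USD",
        "merchant_id": "example-merchant-123",    # hypothetical identifier
        "timestamp": int(time.time()),
    }, sort_keys=True).encode()

    signature = card_key.sign(authorisation)

    # The merchant forwards (authorisation, signature) instead of a card number;
    # verify() raises InvalidSignature if anything was tampered with.
    issuer_public_key.verify(signature, authorisation)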
This would be more secure than what I proposed, but requires changes that are out of the control of the credit card companies.
For the card to sign the transaction, you need to add some kind of card interface to the user's device. Maybe this is what happens with chip cards when you use them at a shop with a card terminal.
I have memorized the CVV for one card I use, and the rest is saved in the browser. So, having to actually get out the credit card would be adding a minor inconvenience. That doesn't matter too much for me, but it probably does mean many millions in revenue for retailers.
I think some people are being a bit too harsh about how the author goes about explaining how you can't prevent all fraud without hurting good users - or in other words, some fraud is just the cost of doing business. Overall it is a good article (that could have probably been a bit shorter) that talks about a topic that is rarely talked about - risk tolerance.
As someone who has worked in the industry for the past 15 years, I can see a few things that I believe are causing risk tolerance levels to increase across the industry.
1. Startups/new businesses that are in growth stage have a large appetite for risk which is pushing the more traditional/legacy companies to also take more risk.
2. High friction experiences that are designed to stop fraudsters require you to provide timely support to any good users that might be blocked by mistake. We all know the trend for most companies has been to move away from providing timely support to their customers as it is extremely expensive. This is another cost (on top of potential lost sales) of creating a high friction experience.
The conclusion/title talks about fraud without any context, which is the misleading thing here. What he means to say is that we have to accept not fighting some fraud because fighting it would be too expensive. The most expensive option perhaps being to not run a business at all, eliminating both fraud and legitimate sales.
I guess this applies to all crimes, even major ones like murder and child abuse. We can monitor everyone all the time, or make sacrifices to live in a more free society.
If you think the optimal amount of crime is greater than zero, at some point we are clearly using different applications of the word optimal. One person is talking about the level under the optimal “solution”, while the other is talking about one constraint that still must be balanced against other constraints. Considered in isolation, the optimal amount of fraud-prevention spending is zero, but then we'd be left with a ton of fraud.
Exactly. The optimum amount of fraud is really zero. But in order to achieve last 0.00001% you may end up screwing up experience for about 99% of your customers by asking them to 10-factor auth and what not.
I think a more fascinating look at this is the difference between "legitimized fraud" and "illegitimate fraud".
Basically, for most businesses the amount of "friendly fraud", meaning customers disputing charges because they changed their mind or didn't want to talk to the company or whatever, is 10x the amount of fraud from stolen cards. (Visa estimates this as 3x but my experience is different).
Civil asset forfeiture is the government seizing property without trial, and the total seized each year is slightly more than total losses to theft.
So between these things, it seems pretty easy to reduce fraud by 75% without much additional friction.
I've spent most of the last decade working in fraud risk management and I love the message that this article conveys. It's great to see someone saying the exact thing I've innately understood but couldn't put into words :)
This is something I now ask when I try out for jobs in Fraud teams. If my hiring manager expects me to bring fraud down to zero, I immediately know that this work relationship may not work because we would be on completely different pages on how some fraud losses are the necessary cost of running a business.
Unfortunately this is mostly an American issue. CC fraud in Europe is minimal because cards have an embedded chip and a PIN is required for each transaction. In addition, when purchasing online, an instant pop-up on your mobile phone asks you to approve or decline the transaction within 2 minutes. Contactless transactions under $25 do not require PIN or pop-up verification. These options are considered inconvenient for American consumers, so we eat the fraud and sign receipts like it's 1989 :-)
Potentially controversial take: this general idea also applies to other areas such as elections. Any sufficiently large election will have to contend with fraud and human error, but this is acceptable as long as the numbers aren't large enough to change the outcome.
If you carefully scrutinize any large election you can almost certainly find at least one example of fraud. However, isolated cases of fraud or human error are not evidence of widescale election rigging.
A lot of the elections in the US in the last 25 years have been pretty close, that's the problem. I guess if they were less close or had some sort of proportional system, it might be less of a problem.
If it's the merchants who carry the burden of credit card fraud why is it that almost all fraud prevention efforts seem to be done by banks/card issuers rather than by merchants ?
Except for a small number of cases involving pre-paid cards, I have never seen a merchant refuse to accept a valid credit card payment for an online purchase. I have however encountered and heard of cases of banks declining transactions they considered possibly fraudulent.
Because the card services are in the business of selling their service to merchants in exchange for a fee, and they have competition in that space. Merchants will (in theory) refuse to work with - or pay as much to - a card service which does insufficient work to prevent fraud.
That explanation doesn't make sense, because the fraud prevention/transaction denials are being done by the cardholder's bank, not by the merchant or payment processor, and merchants don't get to decide which issuing banks they will do business with. For the most part they either have to accept all Visa cards or none (except maybe for some very broad categories like country of origin or pre-paid vs. non pre-paid).
There is a concept in microeconomics called the Lerner equation. A monopolist maximises profits at the price where gross margin % is equal to -1/ price elasticity of demand.
The intuition behind this is their uplift in sales from a small price cut must equal the revenue they lose on all existing items, and their costs of producing the extra items. So if they have a gross margin of 50%, they need price elasticity of demand to be -2, since a 1% price cut will sell 2% more, raising revenue by 1% and costs by 1%.
The same applies to blocking fraudulent customers: you only want to block when your assessed likelihood of fraud is higher than your gross margin. If I think you have a 25% chance of being a fraudster, and I make a 25% margin, then selling to 4 such customers I will make 25% three times, and lose 75% once.
If you have more complicated factors like cost of processing chargeback, different interventions like 3DS/manual review, then the threshold is different, but the overall probabilistic framework and calculating breakeven thresholds can still be used.
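As a tiny worked version of that breakeven logic (the margin and fraud probabilities come from the example above; the chargeback-fee parameter is just an extra knob I added):

    # Accept an order when its expected value is positive. A legitimate sale earns the
    # gross margin m; a fraudulent one loses the cost of goods (1 - m) plus any
    # chargeback fee. With fee = 0 this reduces to "block when p_fraud > margin".
    def expected_value(p_fraud, margin, chargeback_fee=0.0):
        return (1 - p_fraud) * margin - p_fraud * ((1 - margin) + chargeback_fee)

    print(expected_value(p_fraud=0.25, margin=0.25))   # 0.0: exactly breakeven, as in the example
    print(expected_value(p_fraud=0.20, margin=0.25))   # positive: worth accepting
    print(expected_value(p_fraud=0.20, margin=0.25, chargeback_fee=0.15))  # fees lower the cutoff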
The author seems to be doing exactly what he repeatedly claims not to be doing: being cute with his phrasing. He tells you he's going to make a case for fraud ceteris paribus, then actually argues fraud naturally arises through tradeoffs, which anyone who has ever made any kind of decision should be aware of. He wasted my time and had nothing insightful to say.
> These tradeoffs are often intensely difficult to pursue openly. Who wants to be known as the politician in favor of benefits fraud or the financial CEO who thinks they are not laundering enough money?
It is very easy to explain such things, if you can hold two ideas in your head at the same time. It goes like this: "We will pursue benefits fraudsters via every method available to us that does not compromise our ability to get benefits to people who need them, which we must never forget is our primary mission."
People used to talk like this. Politicians used to talk like this! Within my lifetime!
Alas, I know the author is right. That thought is too complicated for us now. Monomania is the curse of the age, which is tragic. Life is complicated enough to require thoughtful tradeoffs between several competing variables, and cannot be simplified.
I'm comparing the US credit card system with the chip+pin system common in my country.
* As you need both the card and the code, and as cards are almost impossible to clone, card fraud and identity theft are almost nonexistent.
* Plenty of online shops allow me to buy something without creating an account or providing a billing address.
* As the whole thing runs on debit instead of credit, nobody cares about credit scores.
* A common complaint from merchants is that the system is expensive. My paper merchant recently grumbled he paid around €4000/year. I don't know if this is normal or how much the credit card system costs a merchant, but subtracting these amounts would provide an upper bound on the preferable amount of fraud.
So while kalzumeus might be right, I believe the system he describes/is used to allows a lot more fraud than required.
I think most of this thesis can be summarized with a few points: (1) a perfect ROC (100% AUC) on fraud detection is impossible, (2) false positives have costs in both lost revenue and customer insults, and (3) the operating point with 100% fraud capture has an unacceptable false positive cost.
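Point (3) is the one worth making concrete: you pick the operating point by total cost, not by capture rate. A toy sketch, with made-up score distributions and made-up dollar figures:

    # Choose the score threshold that minimizes (missed fraud cost + false-decline cost).
    # Both distributions and both dollar figures below are invented for illustration.
    import random

    random.seed(0)
    legit_scores = [random.betavariate(2, 8) for _ in range(10_000)]  # legit orders skew low
    fraud_scores = [random.betavariate(8, 2) for _ in range(100)]     # fraud skews high, but overlaps

    FRAUD_LOSS = 500    # cost of one fraudulent order we let through
    INSULT_COST = 20    # lost margin / goodwill from declining a good customer

    def total_cost(threshold):
        missed_fraud = sum(s < threshold for s in fraud_scores)
        false_declines = sum(s >= threshold for s in legit_scores)
        return missed_fraud * FRAUD_LOSS + false_declines * INSULT_COST

    cost, threshold = min((total_cost(t / 100), t / 100) for t in range(101))
    print(f"cheapest threshold: {threshold:.2f}, total cost: {cost}")

In this toy setup the cheapest threshold deliberately lets a few fraudulent orders through; pushing it down far enough to catch all of them makes the false-decline term take over.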
I thought the article was going to go in another equally compelling direction. If there is no fraud, measures to prevent it become lax because they are unnecessary costs. With no measures in place, fraud comes back because there is no cost to the fraudster.
You could also use this reasoning to say that the optimal number of rapes is greater than zero.
I would disagree with the word “optimal”. The optimal number of frauds and rapes is zero, but unfortunately we don’t really have the realistic ability to achieve that.
Obviously the optimal number of rapes is 0, but the optimal amount of rapes we should try to prevent is not infinite, and thus the optimal amount of rapes we accept as a consequence of the above policy is non-zero.
It's really a simple cost-benefit calculation; the cost of preventing the last 0.1% of rapes on earth is surveillance cameras in every home and egregious violations of privacy, and the cost of such a scheme is almost certainly not worth it.
The simple observation is that there are tradeoffs: in exchange for preventing <bad thing>, we have to give up <good thing>, along some sort of non-linear curve. The cost of rape prevention goes up with each rape prevented, and there comes a point where the cost is no longer worth it and we should call it a day.
People can (and do) argue all day about the point where the marginal cost of rape prevention is too great, but I'm fairly certain most would agree that it's not infinite.
This argument is a naïve cost-benefit analysis, which is already a red flag, but on top of that it claims the damage is done primarily to business that can afford it, ignoring the fact that a non-trivial amount of fraud affects individuals.
> In the overwhelming majority of cases, that is where the waterfall ends. While insurance is available (both specialized chargeback insurance and general business insurance), overwhelmingly businesses simply absorb fraud costs in the same way that they absorb their office rent, staff salaries, and marketing expenses. That $10 to $20 billion number we threw around earlier? This is what happens to it, in the ordinary course of business.
This claim of the “overwhelming majority” being businesses, and of fraud being a marginal, insurance-covered cost, does not square with the fact that millions of individuals are losing billions of dollars to fraud and suffering very negative consequences.
“In 2017, an estimated 3.0 million persons (1.25% of all persons age 18 or older) reported that they were victims of personal financial fraud during the prior 12 months. […] About 14% of financial fraud victims reported the incident to police. About three-quarters of financial fraud victims reported the incident to their family and friends (77%), two-fifths reported the incident to a company’s customer service (42%), and one-third reported the incident to their bank, credit card company, or other payment provider (31%). More than half of financial fraud victims said they experienced socioemotional problems as a consequence of the incident (53%). Financial fraud victims lost $1,090 on average and more than $3.2 billion in total.”
And what about the opportunity cost & lost potential to innovating better solutions to fraud? There’s no good reason to assume the cost to solve this problem is an ongoing expense.
> The reason for this is that Directors of Fraud are aware that the policy choices available to them impact the user experience of fraudsters and legitimate users alike.
I think herein lies the crux: All things interact, and if you think they don't you are just not aware of how. The game is identifying and moving the cogs that a) are either most important and isolated to get you where you want most efficiently or b) interact favorably in concert.
You win relatively by understanding this better than others. You win absolutely by seeing or creating an opportunity to implement a brand new cog.
Especially if the burden of proof of fraud falls mostly on the consumer. This is how it works: we don't know the actual ratio of fraudulent vs. OK cases, so we compare across institutions. If one institution is an outlier, it then arbitrarily changes the acceptance threshold, pushing the cost onto the aggrieved consumer.
If, on the other hand, the cost of misidentifying a case fell on the institution, then they would simply accept only personally identified payments, e.g. SMS or other 2FA, at virtually no cost to them, effectively zeroing out fraud.
In some places with more modern banking, this is pretty common
From the title, I thought it was a reference to the book Lying for Money by Dan Davies. Anyways, the book is a brilliant exploration of this premise and also makes the case for why trust is necessary.
This is something I realized a long time ago when I was running a specific site for a few years. We had some users that would abuse the system, and there was a technical solution that would completely eliminate the abuse, but the systemic cost of implementing it was just too high because it would affect the whole system in a negative way. So you basically accept some amount of abuse/fraud/cheating until it starts to affect your business; this is the optimal choice in most cases.
Heard an ad for a cybersecurity company yesterday and this same thought crossed my mind - how much business (and expertise) is generated to prevent cyber crime? Since the capital companies spend on preventing fraud likely far outweighs what the criminals actually earn, you could easily argue that cyber crime is a net positive for society, given the job creation and technical know-how needed to fill those jobs.
If security was perfect fraud attempts would plummet. If they plummeted cutting corners on security would start to make sense. If companies got too relaxed with security again fraud would be incentivized once more, etc. It's like game theory, it's the reason we can't have nice things, it's because forces more fundamental than we realize have to have their ebb and tide.
Reminds me of Marx and his theories on the productivity of crime.
The criminal moreover produces the whole of the police and of criminal justice, constables, judges, hangmen, juries, etc.; and all these different lines of business, which form equally many categories of the social division of labour, develop different capacities of the human spirit, create new needs and new ways of satisfying them. Torture alone has given rise to the most ingenious mechanical inventions, and employed many honourable craftsmen in the production of its instruments.[1]
That is the broken window fallacy (https://en.m.wikipedia.org/wiki/Parable_of_the_broken_window) which was written in 1850, and Marx wrote the document you linked to in 1862 & 1863. Although I find Marx so impenetrable to read that I can’t even tell what his opinion or theory actually is. I would guess Marx read it, but he doesn’t respond to it, perhaps because in that linked document Marx says “For which reason all vulgar economists—like Bastiat…”. I also wonder what defines an economist as vulgar?
Fraud is waste. Businesses optimise for profit, and that optimisation often leads to some level of waste. No process is perfect.
The article seems to imply that there is a standard revenue/fraud curve.
But what if there isn't such a static condition, and you could jump to a lower-fraud (higher-revenue) situation with different technical measures?
So changing the revenue/fraud curve.
Like: 2fa (like an app confirmation) based on heuristics?
Yes, the fundamental statement is the same, but you changed the existing "rules"
I mean, the optimal amount of anything is non-zero. Getting to 0% of anything, i.e. getting something 100% pure, is next to impossible. Getting near 100% (99.9 with n repeated 9s) is useful, but it stops being cost-effective, or even possible, as n tends to infinity.
This is true for things like welfare fraud (and other anti-help conditions) as well, but unfortunately, in a quest for headlines, taxpayer money is wasted (and injustice performed) trying to take the level to zero.
Some scams in my country have been ongoing for years because the amount of the scam is one unit below what you can report to the right authorities. You can report to the police too, but that is useless.
This is where it would be helpful for people (such as this famous patio blowhard) to learn what cryptocurrency actually is and why it's not a joke. Some day maybe.
What an extremely, needlessly elaborate way of saying "security vs. convenience is a tradeoff." Indeed it is, and that's not a particularly novel insight.
"security vs. convenience is a tradeoff" is an extremely glib and meaningless aphorism that is instinctively innate to almost every living organism.
The statement obliterates the nuance of which tradeoffs need to be made and the cost and impact of those tradeoffs from an economic and social perspective that are foundational to being able to reason about risk.
I wouldn't put it that way, but I would agree with anyone saying that statement omits a lot of information. Sure, it does, and it's pretty much the most general and abstract possible way of saying that. My beef with the article is that, despite its truly gargantuan word count, it hasn't added any new information on top of that statement. Once you know the thesis of the article is "The optimal amount of fraud is non-zero because security is a tradeoff and you want users to have convenience," everything in the article is pretty predictable.
I would have liked to see, say, some nuts-and-bolts discussion of fraud handling in some particular industry -- that would be novel and interesting to me.
I think this is an important point, but it misses things like verifying your transaction with your bank in an easy-to-do, hard-to-fake way. If you were sent to your mobile banking app after completing a purchase and had to Face ID verify that it was you, then fraud rates would essentially be zero.
Yes, such a system is annoying; I know because we have something kinda similar here in Europe. But because all the merchants are using it, I have no option of going to a retailer who doesn't use the system (I probably would if I could, because I tend to use my computer to do things).
I think I agree with OP’s premise that driving “fraud” to “zero” is kind of a fool’s errand: some people, like Bender from Futurama, “just love crime, just love stealin’ things…da dah da”.
But for me at least, it grates more than a little whenever Self-Assured Tech Person With Logic and Statistics In Hand assures you, dear reader, that if you actually crunched the numbers instead of gobbling up pablum from the Washington Post like a lemming, you would in fact realize that Free Enterprise Is Going Just Great.
The World Economic Forum has sufficient data to do a plausible “Social Mobility Index” on 82/195 UN-recognized sovereign states, and it's just one of many data points showing that Capitalism Muzzled by Social Democracy is in fact what you want if “people having a shot at doing better than their parents in large numbers” is a priority.
I’m old enough to have watched the effects of the Operational Research PhD’s at Megacorp “optimizing” every angstrom of human joy and dignity out of living in a Free Enterprise Zone. You can’t do anything these days that involves commerce without bumping into this. Friendly dare for US readers: try invalidating a credit card number in a way that stops every recurring auto-pay that has barnacled itself onto your economic ship is forced to get you to re-auth it. Good luck.
So while driving “fraud” to “zero” might be silly, we can almost surely take a big whack out of it by making a salutary example or 1000 of companies that have “optimized” the right amount of paying OSHA fines rather than allowing bathroom breaks to “all of them”, or “optimized” the right amount of cheap and fast municipal fiber to “zero”, or the right amount of employees to force just below the “gets benefits” line to “whatever the maximum is”.
I worked in butcher shops and call centers and retail in the Clinton Administration, and boy were they after you for every dime. Having been an over-privileged techie for the last decade or two I’ve personally been largely insulated from how much worse it’s gotten since then, but the kids I grew up with for the most part haven’t, and it’s a little hard to regard the significant fraction of them with some “grey at best” side hustle as doing anything other than scamming the scammers who have Corporate Backing.
TL;DR: If your fraud-prevention measures are too stringent, you will alienate your honest customers. Relax just enough so that the losses to fraud are less than what business you would lose if you were any more strict.
Take, for example, many sites asking for the CVV code when using a saved card. In many cases, why?? If I supplied the CVV once and I haven't changed anything since what's the chance a subsequent order is fraud?
There's also the problem that some anti-fraud measures would have to be implemented by the credit card company, but they're not the ones that eat the cost. I could see a market for a credit card with better terms but where you must approve every transaction with an app on your phone--but how do you make that work in the current marketplace?
I have a credit card that supports virtual numbers--but it's a pain to use. Their benefit, but a hassle for me.
> True, but we don't always get the balance right.
Agreed :)
> Take, for example, many sites asking for the CVV code when using a saved card. In many cases, why?? If I supplied the CVV once and I haven't changed anything since what's the chance a subsequent order is fraud?
As a fraud risk manager, I've seen this scenario way too often: Say you have your card saved on a merchant website - fraudsters can often compromise your login on said merchant site and go on a spending spree with all your saved cards (unless you ask for a CVV from time to time, that is).
No, it's quite possible to do a card transaction without the CVV, it's just considered higher risk. However, once a customer has shown they're real and it's shipping to the same address the risk is much lower, the chance that the card is stolen is minuscule.
This is similar to the argument that you shouldn't set a service-level objective of 100% availability. It's not achievable and people who claim that's the goal don't act as if it is - so it's better to talk about what amount of downtime is acceptable given the cost.
Couldn’t they just write the title as “businesses shouldn’t try to completely eliminate fraud” instead of trying to inflate their argument with this pseudo-academic bullshit? Seriously, “non-zero”? Is the “optimal” amount of fraud sometimes negative?
TTBOMK, there is not and has never been a system built by humans that other humans haven't been able to take advantage of for their own devices. It's more an issue of minimizing it and punishing it when we find it.
Do people who write these things really think these are novel concepts? The amount of arrogance and delusion required to state the obvious is hard to comprehend.