No general method to detect fraud (calpaterson.com)
186 points by calpaterson on June 11, 2021 | 101 comments



My approach to detecting fraud is to always assume the worst possible reason why someone did something or why something happened. Always be looking for the angle. You will detect real frauds and manipulations, and also a bunch of manipulations that aren't there at all. In psychological circles this is called catastrophic thinking.

It is a hell of a way to live.


What about applying that reasoning only when people want something from you?

Most people don't want anything from you. Some people like just chatting. Salesmen will feel smothering and uncomfortable because they want something from you. Advertisements trigger me because it's so obvious (in my mind) they want something from you. News feels this way too.

How can I tell? The same cues that fool people who focus on the surface level: flashiness, bravado, loudness, steering the subject towards things I do not care about. Trying to make me feel emotions (FOMO, FUD, even excitement).

You could use it as dating advice: people are turned off when you want something from them too much.

Not all that glitters is gold.


Pickpockets work by just chatting with you while someone else does the work. Stay vigilant!


Except a smart scammer won't do it as a hard sell and look like they need you. They know as well as anyone that aggressive sales look suspicious. Madoff waited for people to come to him.


I was reading some Madoff stuff the other day - I think that's generally true, but he had his share of "one-of-a-kind offer, invest this week" pitches.


I don't think you're claiming otherwise, but just to be clear, this is generally perceived as a bad thing, psychologically, along the lines of OCD/paranoia/anxiety. This is not a healthy approach, and it's concerning that some responders below seem to think it's a good idea.


No, and to be clear, this is not a good approach to life. I was being a bit glib in my comment. I am not advocating for anyone to actually think in this way. I do indeed suffer from OCD/paranoia/anxiety, though it is currently well managed with medication.

For anyone that actually thinks in this way, as I do, I recommend seeking help. It took me half my life to actually get help and I realized after being on medication just how bad my anxiety was. It is good to be cautious, but when you see daggers everywhere it is probably time to get help.


depends if it gives you anxiety. I think the healthy approach is "Prepare for the worst, hope for the best"

I don't need to live in constant fear of the worst, but it helps to be prepared for it


That's not catastrophic thinking then. Catastrophic thinking tends to fixate on irrational concerns, like "Did I run over a person on my drive home and not notice?" or "Is that stranger planning to kill me?".

"prepare for the worst" is super vague, because the worst is unbounded in implausibility.


Unlikely, but not impossible. That is how I would describe my thinking when not on medication. Worrying about if I ran over someone without realizing it is an exact worry I have had.


Do you mind sharing what medication has worked for you and any downsides you've experienced?


Escitalopram has worked well for me. I don't really experience any side effects, but it did take a few months to adjust and feel totally normal. Some sleepiness and a feeling like I could not concentrate fully during that break in period, but both of those subsided.


IMHO that's pretty far from OP's description of a viewpoint.


Good point. To add an exception to the original article then, perhaps the only general method to detect fraud is the trivial method: assume everything is fraud.

In practice though for most goals it's probably better to trade off Type 1 vs Type 2 error.
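
To make that concrete, here's a toy sketch of pricing the two error types against each other. All the rates, costs, and the fraud base rate below are invented for illustration, not taken from anywhere:

  // Toy sketch, not a real detector: weigh Type 1 (false positive)
  // against Type 2 (false negative) error by expected cost.
  // All numbers are made up for illustration.
  public class FraudTradeoff {
    public static void main(String[] args) {
      double fraudRate = 0.01;          // assumed share of cases that are fraud
      double costFalsePositive = 50;    // assumed cost of accusing an honest customer
      double costFalseNegative = 1000;  // assumed cost of a missed fraud
      // Hypothetical detectors: {false positive rate, false negative rate}
      double[][] detectors = { {0.20, 0.02}, {0.05, 0.15}, {0.00, 1.00} /* trust everyone */ };
      for (double[] d : detectors) {
        double expectedCost = (1 - fraudRate) * d[0] * costFalsePositive
                            + fraudRate * d[1] * costFalseNegative;
        System.out.printf("FPR=%.2f FNR=%.2f -> expected cost per case: %.2f%n",
            d[0], d[1], expectedCost);
      }
    }
  }

With these made-up numbers, flagging aggressively costs about as much as trusting everyone; the moderate detector wins.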


Someone said: “people immune to cultist recruitments don’t have friends”


I get the point, though in reality the people without any real friends are the most vulnerable to being dragged into a cult, and once they are in they cannot leave, because they would then be truly alone again.


I mean ... at some level everything today is some kind of "fraud" ... everything that is sold to you is sold not out of altruism, but because somebody else is living off that trade. That person in turn has to maximize their profits, which is why you never buy something at value.


> My approach to detecting fraud is to always assume the worst possible reason why someone did something or why something happened.

This goes against Hanlon's razor, though.


Catastrophic thinking is rarely based in good probabilistic thinking. Logically I know that, but my brain latches on to the worst possible rational outcome as the most likely outcome.

I do not try to think in this way. I wish I didn’t.


Human failure and weakness is something we have to accept, the response to it doesn’t need to be entirely negative. Often we can be empathetic in some way or another, which doesn’t exclude tough love.

Also, we often underestimate others when thinking too cautiously or fearfully. This can often be cured through direct interaction.

Whenever I feel overwhelmed by negative perspectives/thinking, I try one of these angles. Sometimes it works, especially if I put effort into engaging with the people that are involved.


Hanlon's razor is specifically used by malicious people to conceal their malice, and if you rely on it too heavily you will never detect fraud


Reminds me of the characterization of Harry Markopolos in Malcolm Gladwell’s book Talking to Strangers.

His takeaway is that society is a pretty bleak place when we all lose our “default to trust” mode of operation.


I remember a line from a manga called "Liar Game". The line goes: "If you meet someone new, doubt them, question them, and then you will understand them better". I think this should be the SOP for when you meet anyone new.


> My approach to detecting fraud is to always assume the worst possible reason why someone did something or why something happened.

> It is a hell of a way to live.

True, but it is the honest way to live. Especially employment/management relationships are all about manipulating, cajoling, and ultimately threatening (by homelessness and starvation) people into doing things they would not otherwise do. Things are not much better in private/social life, except maybe within some families / friend groups or between lovers during the first year or so. Life is so bleak.


The solution is worse than the problem if you live with this attitude.

In addition it is too reductive and doesn't represent the very complex (even enjoyably subtle) interactions that go on between humans who do want something from each other.

Life has many illusions and it is up to you which one you pick. The reality you are enduring is just another illusion you've settled on: except it's a crap one.


People do things for reasons. That doesn't make life bleak, that just means you should be careful about incentives in your relationships. Shun people who have an incentive to harm you, embrace people who have an incentive to help you and seek out people who think that their best chance at an easy life is making those around them prosper.


Not all relationships are like that. Customers buy my software because it provides some value to them, I sell it because it provides an income. It's a relationship that works to our mutual advantage.

I don't manipulate them, there are no dark patterns on my web site, I don't hold their data hostage or nickel and dime them.


> A surprising number of employees get duped into accepting part of their pay packet as options in their employers' private (and highly illiquid) stock

I'd not heard this stated so clearly before.


Can I get a second opinion on this (from someone here who feels they know, that is)? Say the stock option is a call at a high valuation, but you know all exits would be higher - is the underlying stock that much more valuable?


As long as trust exists, fraud will exist. The only way to eliminate fraud is to eliminate trust, and trustless societies are pretty dystopian.


In my experience working in an insurance claims department, we overlooked a lot of fraud, intentionally. Usually, only the large, extremely obvious cases with plenty of evidence were pursued. I believe a lot of the soft fraud involving slightly inflated claims went undetected.

Part of this is because you have to pay the salaries of the investigators, so there is a cost of detection. It's not worth recovering an amount that is less than the cost of an investigation. Secondly, a company will lose customers if it has a reputation for having too many false positives or a default stance of mistrust of their policyholders.
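
A sketch of that triage rule, with all numbers invented (the function and thresholds are mine, purely for illustration):

  // Sketch of the claims-triage rule described above (numbers invented):
  // investigate only when expected recovery exceeds investigation cost.
  public class ClaimsTriage {
    static boolean worthInvestigating(double claimAmount, double suspectedInflation,
                                      double probFraud, double investigationCost) {
      double expectedRecovery = claimAmount * suspectedInflation * probFraud;
      return expectedRecovery > investigationCost;
    }

    public static void main(String[] args) {
      // A slightly inflated claim: not worth the investigator's salary.
      System.out.println(worthInvestigating(2_000, 0.10, 0.5, 400));      // false
      // A large, obvious case with plenty of evidence: pursued.
      System.out.println(worthInvestigating(250_000, 0.80, 0.9, 20_000)); // true
    }
  }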


I'm not sure. Reducing necessity for trust is a worthwhile goal.

It's wonderful to not have to trust too much, and instead to rely on evidence and have a contingency if things go horribly south.


Is that possible? Every system we use has to be trusted at some point. I don't trust the internet so I use https but that is only really saying that I trust the IETF or my certificate provider and I (personally) have no realistic way of knowing whether either is trustworthy.

Did the IETF miss a vulnerability when they invented the protocol? Was there someone on the team planted by the NSA? Do the cert authorities provide the means to hack my data in some way?

I don't think they do, but that is because I trust them: I don't think it would be in their interest, and I think that if the IETF messed with the protocol, someone would have noticed. Yet we have seen OpenSSL bugs that sat there for years because the code was too complex for most people to understand.

It just sounds like turtles all the way down because ultimately you have to trust someone or something even if it is just time and experience.


Reducing the amount of places where you have to trust, and then demanding more transparency from those places helps create a society where less trust is needed, and fraud has fewer options.



Trust is great and it's enabled by reputation systems.

https://ncase.me/trust/


I'd characterize it this way: Reputation systems don't enable trust, but are an attempt to algorithmically quantify it. To paraphrase the old aphorism, all reputation models are wrong but some are useful. Unfortunately, they're inevitably abused by the untrustworthy that they're supposed to filter. Madoff had increasingly-good reputation signals until he didn't.


It’s still trust, just shifted to something else.


It's actually more trust, because the bad consequences are reduced. It feels like less because it's easier to trust, but the amount of real-world trust being extended is greater.

That's why an effective police, punishment for business crimes, and law enforcement even when the person moves into a different place are important. A more trusting society is a happier and richer one, so we should make it easy to trust.


¬¬


>By switching, you have a 2/3 chance.

I guess I need to read up on the Monty Hall problem again, because I always thought it was 1/2 chance if you switch (vs 1/3 chance if you don't).


I find the Monty Hall problem becomes intuitive when scaled.

100 doors, 1 car, 99 goats. Pick a door. The host opens 98 of the others, all showing goats ... Do you think you picked the 1/100 door with the car, or that it's behind the only other door left standing?
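
If you'd rather see it than argue it, here's a quick sketch of the 100-door version with an always-switching guest (class name and trial count are arbitrary):

  import java.util.Random;

  // Minimal sketch of the 100-door variant: the host opens 98 goat doors,
  // the guest always switches to the single remaining closed door.
  // Switching wins exactly when the first pick missed the car.
  public class HundredDoors {
    public static void main(String[] args) {
      Random random = new Random();
      int trials = 1_000_000, wins = 0;
      for (int i = 0; i < trials; i++) {
        int car = random.nextInt(100);
        int firstPick = random.nextInt(100);
        if (firstPick != car) wins++; // the switch lands on the car
      }
      // Prints ~0.99
      System.out.printf("Win rate when switching: %.4f%n", (double) wins / trials);
    }
  }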


Exactly. It also helps to emphasise that the eponymous host knows which door contains the prize and will never open the prize door before asking you whether you want to switch.


The variants where the host does not know are also worth considering. If they’re just guessing and happened to not open the prize door by chance, you’re back at 50/50. If they have a hangover and you think there’s a 5% chance they forgot where the prize is and still just happened to open the doors correctly, well that makes the math even more interesting.


As long as the host doesn't accidentally open the prize door, it doesn't matter whether he forgot or not. To test my statement you could write a simulation where the host randomly opens a door.


That's not right, and it's one of the more confusing parts of the problem in my opinion.

I can make sense of it by drawing out the probability trees and seeing that it's 1/2 rather than 2/3 in this case, but it actually sticks with me if I think of it more like this:

My prior is that there's a 1/3 chance I picked the car; that's the world where the host can't reveal anything other than a goat.

In the scenario where the host then reveals a goat intentionally, my prior doesn't change. The host can always reveal a goat. This gives me no information. Therefore, it's still 1/3 that I picked the car, so the remaining door must be 2/3, so I switch and get a 2/3 chance.

But in the scenario where the host reveals a door at random, and it happens to be a goat, it's time to update my priors. Since he didn't accidentally reveal the prize, the probability that I'm in the world where he couldn't accidentally reveal the prize is increased relative to the probability that I'm in the world where he could have.
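
A minimal sketch of that update, conditioning on the random host having luckily revealed a goat (the structure and names here are mine, not the simulation posted elsewhere in the thread):

  import java.util.Random;

  // Blind-host case: the host opens one of the two unpicked doors at random,
  // and we only count games where he happened to reveal a goat.
  public class BlindHostConditional {
    public static void main(String[] args) {
      Random random = new Random();
      int trials = 1_000_000, goatRevealed = 0, switchWins = 0;
      for (int i = 0; i < trials; i++) {
        int car = random.nextInt(3);
        int guest = random.nextInt(3);
        int host = (guest + 1 + random.nextInt(2)) % 3; // random unpicked door
        if (host == car) continue;        // host accidentally showed the prize
        goatRevealed++;
        int remaining = 3 - guest - host; // door indices 0+1+2 sum to 3
        if (remaining == car) switchWins++;
      }
      // Prints ~0.50: given a lucky goat reveal, switching no longer helps.
      System.out.printf("P(switch wins | goat revealed) = %.4f%n",
          (double) switchWins / goatRevealed);
    }
  }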


See my simulation code.


Yeah, you still win 2/3 of the time. Some of those wins come from the host accidentally showing you the prize, which is prior to your decision.


Completely false. Draw up a probability table for the case where the host picks a door at random (so 1/3 times they reveal the prize) and you'll see.


What happens if the host reveals the prize? Is the game repeated? I'm saying as long as he doesn't accidentally open the prize door, it doesn't matter. If you insta-lose in that case, we're talking about a different game.


> What happens if the host reveals the prize? Is the game repeated?

Maybe. Maybe you insta-lose. Maybe you insta-win. It doesn't matter. (But by definition if the host picks a door at random, there is a possibility that they will pick the door with the prize behind it, so something must happen in that case).

> I'm saying as long as he doesn't accidentally open the prize door, it doesn't matter.

But that's false. If the host picked the door to reveal at random then your chance is 1/2 if you switch and 1/2 if you keep your original door. The 1/3-2/3 case only happens if the host deliberately picked a door that didn't have the prize.


See for yourself:

  import java.util.List;
  import java.util.Random;
  import java.util.stream.Collectors;
  import java.util.stream.Stream;
  
  public class MontyHall {
    public static void main(String[] args) {
      Random random = new Random();
      final boolean GUEST_ALWAYS_SWITCHES = true /* change guest strategy */;
      final boolean HOST_IS_DRUNK = false /* simulate host knowledge */;
      final boolean GUEST_WINS_IF_HOST_PICKS_PRIZE_BY_ACCIDENT = true;
      final double GAME_COUNT = 1_000_000;
      double winCount = 0;
      for(int i = 0; i < GAME_COUNT; i++) {
        List<String> remainingDoors = Stream.of("A", "B", "C").collect(Collectors.toList());
        String prizeDoor = remainingDoors.get(random.nextInt(3));
        String guestInitialPick = remainingDoors.remove(random.nextInt(remainingDoors.size()));
  
        String hostPick;
        if(HOST_IS_DRUNK) {
          hostPick = /* host picks random */ remainingDoors.remove(random.nextInt(remainingDoors.size()));
        }
        else {
          // host is sober and always opens empty door
          hostPick = remainingDoors.get(0).equals(prizeDoor) ? remainingDoors.remove(1) : remainingDoors.remove(0);
        }
  
        if(hostPick.equals(prizeDoor)) {
          if(GUEST_WINS_IF_HOST_PICKS_PRIZE_BY_ACCIDENT) {
            winCount++;
          }
          continue /* insta win/loss */;
        }
  
        // guest decides to stick with initial pick or switch
        String guestFinalPick = guestInitialPick;
        if(GUEST_ALWAYS_SWITCHES) {
          guestFinalPick = remainingDoors.get(0);
        }
  
        if(guestFinalPick.equals(prizeDoor)) {
          winCount++;
        }
      }
  
      System.out.println(String.format("Win rate: %.2f", winCount / GAME_COUNT));
    }
  }


Try setting GUEST_ALWAYS_SWITCHES=false, you'll find your winrate is still 2/3.

In the if(hostPick.equals(prizeDoor)) block, rather than maybe incrementing winCount, you need to decrement GAME_COUNT (or rather the number that you're going to divide by at the end - don't reduce the number of iterations) - we're talking about the probability when you're deciding whether to switch (so after the host has already opened the door and not hit the prize).


My statement was that if you always switch, it doesn't matter whether the host is blind or not. But I must say it is an interesting finding from the simulation that your win rate is always 2/3 independent of your strategy (provided the host is blind and you win when the host makes a mistake).

I'm always talking about the probability of winning the complete game when sticking to a certain strategy. I never look at intermediate probabilities.

Decrementing the game count or the divisor would be like crossing out or ignoring decision tree branches.

If you change the game rules and say the game is repeated if the drunken host picks the prize, then the simulation would have to repeat until the guest has either won or lost, but the game count would still remain the same (= the number of times a simulated guest is invited to the game show).


> My statement was that if you always switch, it doesn't matter whether the host is blind or not.

Well, it does matter unless you have the rather unusual rule that you win if the host reveals the prize - normally it would make a lot more sense to say you lose in that case. (In fact this is the deep reason why switching works in the original problem - if you switch, then you win if a blind host "would have" revealed the prize, whereas if you don't switch, you lose).

The relevant general fact is that if the host is blind, your strategy never makes any difference.

> Decrementing the game count or the divisor would be the like crossing out or ignoring decision tree branches.

Well, there is no decision to be made if the host revealed the prize - you never get the choice of whether to switch or not. When we ask about the probabilities after a certain choice, at the point where you're making that choice you already know that you've certainly had to make that choice. Otherwise you could equally well say that you have to include the probability of getting through to the final round, the probability of getting onto the gameshow in the first place, ...


You said: "I can make sense of it by drawing out the probability trees and see that it's 1/2 rather than 2/3 in this case"

(Edit: sorry that was not your statement, but that from furyofantares.)

Now you agreed that it is still 2/3, so I still stand by my statement "As long as the host doesn't accidentally open the prize door, it doesn't matter whether he forgot or not."

> Well, it does matter unless you have the rather unusual rule that you win if the host reveals the prize

If you change the game and say the host is blind, you have to also make a rule about what happens if the blind host reveals the prize. Otherwise how could you decide on a strategy as a guest?

> Well, there is no decision to be made if the host revealed the prize

No chance for the guest to switch, yes, but it is still a branch in the whole probability tree, and you have to count it either as a win or a loss for the guest. They can't just all go home and pretend the show didn't happen.

> Otherwise could equally well say that you have to include the probability of getting through to the final round, the probability of getting onto the gameshow in the first place, ...

The problem statement is: "What strategy (stick or switch) is best for the guest (= has higher probability to win), provided he gets into the game show final." No ambiguity here.


> If you change the game and say the host is blind, you have to also make a rule about what happens if the blind host reveals the prize.

The most natural rule is that the guest loses: the guest picked a door that didn't have the prize, they get what's behind "their" door. (And note that the much-argued formulation already "changes" the game from what was done on the original gameshow).

> The problem statement is: "What strategy (stick or switch) is best for the guest (= has higher probability to win), provided he gets into the game show final."

Why "provided he gets into the gameshow final" and not "provided he gets into the gameshow final and is offered the chance to switch"? Like, if you're asking whether a chess player should take an en passant capture, you'd look at whether capturing or declining en passant leads to winning more often, you wouldn't look at all possible chess games (including those where there was no chance to capture en passant) because that just adds a bunch of irrelevant cases.


> The most natural rule is that the guest loses

I don't want to argue which rule would be more natural or make a more interesting show. But without a clear rule, I cannot decide on a playing strategy.

> Why "provided he gets into the gameshow final"?

Because I assumed that the host always has to open a door and give a choice to switch.


> I don't want to argue which rule would be more natural or make a more interesting show. But without a clear rule, I cannot decide on a playing strategy.

It doesn't make any difference to your strategy! You don't get any choice if the host opens the door and reveals the prize, so your strategy can't possibly be affected by what the payoff for that scenario is - even if you win 10x the prize if that happens, or get executed if that happens, it makes no difference to whether you should switch or not in the scenario where you actually do get given a choice.


I agree you do not need a strategy if the rule is insta win/loss.

But if the rule were to reset the game until the host doesn't make any more mistakes, it matters.


No it doesn't. If the host chose randomly and revealed a non-prize, i.e. the point where you're making the choice, your odds are 1/2 for either door and your strategy doesn't matter.


There's a third rule set where switching always makes you lose:

Guest selects a door. If it has a goat behind it, the host opens that door and says "you lose". If the guest initially picks the car, then the host opens a different door and asks the guest if they want to switch.

For the player, if you've reached the point that the host is offering you to switch, there's no way to know whether you are in the case where switching improves your odds to 2/3, keeps them the same, or decreases them to zero.


> My statement was that if you always switch, it doesn't matter whether the host is blind or not.

You responded to someone who said it's back to 50/50 once a random host reveals a goat.

I thought you were contradicting this or adding to it, but if you were saying it doesn't matter if you switch, then yeah, that's what 50/50 means.


Beautiful short explanation. I always struggle getting this point across in discussions.


My hopes for this explanation were too high. I just tested it on my coworkers. No luck. I reverted to suggesting they write a short simulation script to convince themselves with experimental data.


I personally find the generalized Monty Hall problem as unintuitive as the original one. But I believe I found a better way to make it intuitive (at least for people who have heard about classical and conditional probability). And it even worked on one of my friends :)

If your first choice was lucky and two goats remained behind the other doors, then Monty doesn't change anything by showing you the goat behind one of them (it's the same as if Monty had selected a random door).

Things become interesting if you were unlucky and selected a goat. Monty MUST show you the other goat (because there is only one available for him to select). And thus he introduces information into an otherwise random selection (it stops being random).

And by doing this he eliminates, for you, the conditional possibility of selecting the second goat when you selected one in the first round (and in this conditional scenario switching is a sure bet, which changes the overall odds to 2/3).


It's a 1/3 chance if you pick before the second door is opened and do not switch.

It's a 1/2 chance if you pick after the second door is opened, regardless of switching.

It's a 2/3 chance if you pick before the second door is open and switch after the second door is opened.

I recommend reading again, because I don't want to write up the logic.

Of bigger concern is that this was not true on "Let's Make a Deal". The problem assumes that Monty always offered the option to switch. In reality, he did not, and the offers were not random. That corrected for the issue to a large degree in practice.


That doesn't add up. Literally, 1/3 + 1/2 does not add up to 1. What third option exists that has a 1/6 chance of happening?

The basic argument is: if you don't switch, you have a 1/3 chance of winning. So if you do switch, you have a 1/3 chance of losing. Hence if you switch you have a 2/3 chance of winning.

How the door you switched to went from a 1/3 chance to double that is the hard part to explain. But it must be the case.


It's a 1/2 chance at the point where you're considering the switch.

If you start the game with the intention of switching, you have a 2/3 chance of being successful because the winning strategy is to miss the car on your first pick.


No, I don’t think this is right.

Regardless of your intended plan, you only had a 1/3 chance of picking correctly the first time, so switching gets you a 2/3 chance.


No, it's a 2/3 chance at that point.

When you choose initially, you have a 1/3 probability of getting the right one, leaving a 2/3 probability that the car is on one of the other two.

The host reveals one of the other two. So that 2/3 probability applies to the remaining door. Here is a short C implementation that made it very clear to me...

  #include <stdio.h>
  #include <stdlib.h>

  int doround () {
    int car = rand() % 3;
    int firstchoice = rand() % 3;

    // host reveals one of the goat doors

    if (car == firstchoice) {
      // you changing to the other door after the reveal is a loss
      return 0;
    }

    // you changing to the other door after the reveal is a win
    return 1;
  }

  int main(int argc, char** argv) {
    int wins = 0;
    int rounds = 1000;
    for (int round = 0; round < rounds; round++) {
      wins += doround();
    }
    printf("Worked in %d of %d rounds\n", wins, rounds);
    return 0;
  }


It's been a while since I actively thought about the Monty Hall problem; I thought I understood it intuitively back then. I thought it had become one of those "oh yeah, that's unintuitive, but once it clicks it's fine" things for me.

I thought through it again, and I'm angry now.

At least I'm not alone (got this link from the article): https://web.archive.org/web/20140413131827/http://www.decisi...


That would mean you have a 5/6 chance of finding the car if you were allowed to pick both doors. You can see the problem with that. In reality, it's 2/3 and 1/3 respectively.


If you started with 4 doors, where 3 of them had goats, wouldn't you have a 1/4 chance of picking the car the first time and a 1/3 chance of picking the car the second time?


1/4 chance if you keep your original door, 3/4 chance if you switch (assuming the host opens two doors to show goats). If the host opened one door (so you have your original door and two other doors to pick from) then it's a 3/8 chance if you switch, if that's what you're asking.
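
Since the 3/8 figure is easy to doubt, here's a small simulation sketch of that variant (the host knowingly opens one goat door, the guest switches to one of the two remaining closed doors at random; class name and structure are mine):

  import java.util.ArrayList;
  import java.util.List;
  import java.util.Random;

  // Four doors, one car: host opens ONE goat door among the three unpicked,
  // guest then switches to a random remaining closed door. Expect ~3/8.
  public class FourDoors {
    public static void main(String[] args) {
      Random random = new Random();
      int trials = 1_000_000, wins = 0;
      for (int i = 0; i < trials; i++) {
        int car = random.nextInt(4);
        int guest = random.nextInt(4);
        List<Integer> closed = new ArrayList<>();
        for (int d = 0; d < 4; d++) {
          if (d != guest) closed.add(d);
        }
        // Host knowingly opens one goat door among the unpicked three.
        List<Integer> goats = new ArrayList<>();
        for (int d : closed) {
          if (d != car) goats.add(d);
        }
        int host = goats.get(random.nextInt(goats.size()));
        closed.remove(Integer.valueOf(host));
        // Guest switches to one of the two remaining closed doors at random.
        int finalPick = closed.get(random.nextInt(closed.size()));
        if (finalPick == car) wins++;
      }
      // ~0.375 = (3/4 chance the car is still closed) * (1/2 chance of picking it)
      System.out.printf("Win rate when switching: %.4f%n", (double) wins / trials);
    }
  }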


Yes, I was asking about the latter, thanks!


When using the switch strategy, you win if you choose a goat initially. Since there are two goats and one car, this is a 2/3 chance.


  Door A | Door B | Door C
  -------+--------+-------
  Car    | Goat   | Goat
  Goat   | Car    | Goat
  Goat   | Goat   | Car

The above table shows all possible distributions of the car and goats. If your initial choice was Door A, then you only have a 1/3 probability. But after door B or C is opened, the probability increases to 2/3 if you switch.

The key is that the host will always open a door with a goat.


I have read and own the recommended book at the end, Financial Shenanigans. Anyone who has ever had to measure up to some imperfect KPI will find it relatable.

My personal way of looking for fraud is to look at incentives. What might this person stand to gain if I take this action, and do my incentives align in any way with his?


This doesn't work in some fraud cases. If Madoff is selling you an investment, it is obvious what you think he gets out of it: he invests your money and keeps a proportion, and that is normal business, right?

As the article says, his clever angle was not over-inflating the ROI of the investments, which might well have made them look implausible.

What is harder to spot is whether an investment is actually taking place.


Is there any KPI that is perfect? I think you could find a valid criticism of any KPI.

The incentive view is a good one, and one often followed by auditors, starting with the pay of company accountants!


Makes you wonder if having a lottery system for firing people in a company at random would perform better than KPIs, or at least provide a baseline for KPIs to be measured against.

8-12 months severance / perf bonus, glowing reference for taking one for the team, and placement services. It's just like normal attrition, but makes planning a scam and consolidating a lot of anti-patterns way less viable.

It seems psychotic, but compared to the incidence of mendacity, it's a pretty good deal for everyone.


Agreed. Who wants to have unlucky employees? Get rid of them by using the RNG!


https://en.wikipedia.org/wiki/Benford%27s_law

This is a good way to detect fraud. Basically, many naturally occurring number sets tend to have more values with leading digits 1, 2, 3 than 7, 8, 9. Fraudsters will make up random data that doesn't follow this.
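
As a sketch of what such a check might look like - using powers of 2 as a stand-in dataset, since they're known to follow Benford's law (the class name and trial count are arbitrary):

  // Compare observed leading-digit frequencies of powers of 2 against
  // Benford's predicted frequency, log10(1 + 1/d).
  public class BenfordCheck {
    public static void main(String[] args) {
      int[] counts = new int[10];
      int n = 10_000;
      double mantissa = 1;
      for (int i = 0; i < n; i++) {
        mantissa *= 2;
        while (mantissa >= 10) mantissa /= 10; // keep in [1,10); leading digit survives
        counts[(int) mantissa]++;
      }
      for (int d = 1; d <= 9; d++) {
        System.out.printf("digit %d: observed %.3f, Benford %.3f%n",
            d, (double) counts[d] / n, Math.log10(1 + 1.0 / d));
      }
    }
  }

Running the same tally over a suspect ledger and comparing against the Benford line is the usual first-pass test.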


> I think confirmation bias afflicts aficionados most of all.

Interesting to link this to https://news.ycombinator.com/item?id=27467999 (hedgehogs/foxes and "one big idea"); the people with one big idea, who are aficionados of that idea, are easier to exploit because they're using the idea to avoid being contingent: that is, they're not looking at what's actually happening, merely whether they can fit the idea to it.

Following this, a lot of modern social media culture war consists of (a) selling you a universal idea that explains everything you see in the media and then (b) recruiting you into a cult which uses your support (money, votes) for their own gain. The ur-example is of course Nazism selling the idea that Jews were the problem to which they had the "solution", but you can see analogies to this all over the political spectrum.


> you can see analogies to this all over the political spectrum.

No, you can see analogies of this specifically on the right, which is just doing the same thing still.


If you create the right atmosphere, you will find your junior employees will become very adept at keeping a suspicious mind. This is endemic in most accounts staff, but only if you encourage it.


Mods: Please consider changing the title. It makes it seem like this awesome, grounded, helpful, 100% licit advisory outlining clever examples of hard-to-detect fraud is instead some kind of NSFW nonsense. So glad I RTFA; hope others will too.


Thanks: glad you liked it. The original title is my fault as the author, because I picked it, and it took me a while to realise that it was so terrible. Sorry about that.


If it weren't for this comment, I would have missed this one. Thanks for the flag, and a +1 to this being worthwhile.


I agree, this was a fantastic read.


And here I was hoping for a tutorial on unusual sexual maneuvers. So yes, it should be fixed so nobody else is disappointed. (I'm actually semi serious, if you're wondering)


The title is editorialized (and I'd say egregiously so): the title of the article is "No general method to detect fraud"


Seems like the author of TFA submitted it to HN, so they editorialised their own title.


We cut authors some slack about that, but not when the submitted title is clickbait. We've changed it now. (Submitted title was "Slick Tricks for Tricky Dicks". That's one hell of a stretch.)

"Please use the original title, unless it is misleading or linkbait; don't editorialize." https://news.ycombinator.com/newsguidelines.html

The article itself looks good though!


Sorry about that, I changed the title after posting it here, when I realised how terrible it was. It wasn't really intended as clickbait though and in fact it actually functioned as anti-clickbait. As a few people have said, it looked like a "risky click". :|

Glad you liked it though.


> I changed the title after posting it here

Ah, like the New York Times. No worries :)


I'm very sorry about that. The submitted title was the one I originally had but it took me a while to realise that it wasn't very good. I changed my mind about it but you can't edit titles on HN submissions after publishing.


> The best defence against frauds and scams seems to be a kind of "intellectual vaccination" via repeated exposure to benign, non-functional specimens.

Similarly, ransomware is doing us all a favor. Every ransomware hack exposes and closes a vulnerability that could do much more damage in the hands of a truly malicious actor such as an enemy in wartime.


Ransomware isn't a "benign, non-functional specimen" :). It has real harms and they can be quite serious.

I suppose that - always dangerous to extend an analogy - ransomware is more like "intellectual variolation" than vaccination.

I wonder what would happen in wartime though.


Random, sparse ransomware attacks are absolutely benign compared to what a wartime enemy would do. If you don't think so, you haven't properly imagined the serious, extensive, real world damage that could be done by coordinated simultaneous attacks. Look at Stuxnet's destruction of machinery and then imagine that applied to a wide swath of the economy, including basic infrastructure like water and power, all at once.



