The most unfortunate thing about this whole situation is that it was poor Chad himself who ended up discovering and shutting down the fraudsters. This should not have been the case, and I apologize on behalf of my former employer. I sincerely wish I had been able to help catch this before it got out of hand. (Disclaimer: I am the former Operations / Support / Fraud Investigator for Balanced Payments.)
As it turns out, the CEO of BalancedPayments is (there is just no nice way to put this) an unethical bag of scum. He recently went on some kind of insane power trip, completely disregarding the needs of his customers, putting me on unpaid leave for ... reporting an incident of fraud to a bank. I reported an incident exactly like the one Chad discusses here, but the dollar amount stolen was much higher, and the fraudster a repeat offender.
Anyway, after that last meeting where he was sneering and enjoying the power trip of getting to "fire" somebody far too much, I can confidently say that Balanced should not be trusted.
It's important that any company a marketplace entrusts its financial data with is an ethical one. So, yeah, looks like I'm on the job market; ping me : http://lnkd.in/NuBGDY
There are so many things wrong with this post that I would strongly advise you to delete it. Besides the libel, it doesn't really paint you in a good light either, especially if you're going to be looking for a job. I would suggest keeping your dirty laundry off the Internet.
I for one appreciated the info. But you're right: shawnee should definitely consider trying to hide the hate, because even well-justified and 100% righteous anger can prevent you from landing the perfect job you want. Pretend you're a zen master; a lot of potential employers only want people who have never been victimized like this at work, particularly in their leadership roles. I speak from experience: it's better to paint a picture of a clear, strong, victorious past than a picture including the truth of having worked for unscrupulous jerks who you had serious disagreements with.
On the other hand, the world would be a better place if we didn't all have to make nice-nice and pretend, so rock on!
People are way too uptight about stuff like this when it comes to finding your next job. It's true that it will close some doors, but it will open others. And for the vast majority of openings, nobody will ever know.
I enjoyed reading it. I doubt this will come back to haunt the author with respect to future work. If he's good at programming, pretty much anyone will overlook his "scumbag" remark. After all, who identifies with the group "scumbag" and will be offended?
This is, sadly, false. It may be true that anyone cool would ignore such things, but there are plenty of companies and recruiters out there that will consider this kind of thing a red flag and downvote people during the recruiting process. (Edit: this comment has been getting upvoted and downvoted in equal proportions, which kind of supports my point. It's an ugly truth, what I'm saying here.)
Agreed - whether or not the CEO is an "unethical bag of scum", this person just made some colorful remarks about the CEO, and linked their real name in the same comment. All on Hacker News, which could be portrayed as a source of news for the industry, thus making the comment damaging.
This is very dangerous ground, and could be a case of libel if the "unethical" CEO catches this.
I'm pretty sure libel and slander are really hard to actually get judgements on. The information can't be true, and the victim has to be able to demonstrate damages. There may be other technical factors as well, such as whether the victim is famous and whether they rely on having a clear name to exist in the world. I'm not a lawyer, but my understanding is that people get more nervous about slander than they really need to.
Not in the UK. The defendant must prove that the statement is true[1]. This is why the UK is a prime libel tourism destination, as the cost of defending the statement is often prohibitive to an individual.
Being a statement on the internet, I believe this case could be tried in the UK, hence the tourism aspect.
Ridiculous libel judgements in the UK will not be enforced in the US if the judgements violate the First Amendment.
I'm just waiting for day when libel tourism results in no one visiting the UK -- actual tourism -- because everyone has UK libel judgements against them.
This CEO would have to prove that they behave ethically, and then extrapolate the HN viewership with potential customers and thus lost revenue and/or a loss of brand equity.
This is certainly not a stretch for civil court, nor would it be inexpensive for either party involved.
I'm no lawyer, but I'm pretty sure that any US court would consider "unethical bag of scum" to be a statement of opinion, and thus not covered by defamation laws.
I think what steve8918 is trying to say is that a comment such as the one made by shawnee_35 is not seen in a good light by some people, and I agree with him. That doesn't mean people shouldn't say what they think about someone, but name-calling is not a good sign.
No, this is vindictive: http://freespire.com/. That's a site set up by a former exec of a defunct company, attacking its former CEO. shawnee_'s comment is exploding with tact by comparison, and any HR drone who doesn't see the difference doesn't deserve their position.
Yes, you can find more vindictive things: there are people out there who have not only set up a silly website, but have made it their life's mission to stalk and terrorize the people they dislike; there are even people who have captured and tortured the objects of their hatred/revenge. Obviously, this is a massively different situation than simply calling someone names, and anyone who fails to recognize that fact is being dense: these websites and comments are, after all, "harmless name calling".
However, that really doesn't excuse the behavior: just because you can find someone worse than you, making you look good in comparison, doesn't mean that you are actually doing well... "have you seen Steve? he failed that class; in comparison, my C is great! if you really can't see the difference, you don't deserve to be a teacher" <- this statement isn't even false, as a C really is massively different than an F... at least you tried... but is it really something we should be happy about? Seriously?
I am not certain whether I agree with that; I might, actually... I certainly would if it were in private, or in context. However, that doesn't make the argument "if you think X is Y then you should check out Z: it is so much more Y you will stop calling X Y by comparison" a useful argument: you can almost always find more and more extreme examples, and eventually you are saying "at least he's not Hitler; you can tell the difference, right?".
There is absolutely no doubt that the payment processor should have caught this way earlier. It is in their best interest.
That said I am not sure how BalancedPayments is built. By that I mean how much risk they are taking in being part of this payment ecosystem. It looks like they hold the money in their own bank account so BalancedPayments themselves uses another processor to do their payment processing. The other alternative is that they are big enough to communicate directly with the backend network like IPPay, First Data Omaha, or FNBO...but they don't seem that large.
These claims, valid or not, are tangential to this situation, right? I mean, is there something specific to how Balanced handles payment processing that would be conducive to the kind of fraud alleged in the OP? It seems processor-agnostic.
In my experience, both firing someone and being fired are fraught exercises, and can generate a lot of heat on both sides of the relationship. I'm not going to leave Balanced just because they're clumsy at HR. If you think there are structural issues with regard to fraud prevention, that's a much more serious charge, and would need to be substantially substantiated, probably in a new thread.
My company (Sift Science) helps sites fight credit card fraud. We work with a few large ($100m+ revenue) marketplaces, and here are some things I've learned.
First off, strictly speaking, this is most likely to be a stolen credit card (i.e., fraud) rather than money laundering. You do NOT benefit from fraud, because when the cardholder notices the charges, they'll call up their bank and issue a chargeback. The $488.15 in your account will actually be removed and given back to the original cardholders. In addition, each fraudulent charge carries a $15-$25 fee, which you're liable for.
https://www.balancedpayments.com/docs/testing#chargebacks---...
What's worse, chargebacks can take 60-120 days to reach you, since there's delay at every step: the customer's bank, the credit card networks, your payment gateway, and the acquiring bank (your bank). Unfortunately, that means you won't know how much fraud you have today until February (!). It's a broken system, but that's how all the major card networks work, so it's something that everybody who sells online has to deal with.
If your fraud rate is higher than about 2% for two months in a six month period, Visa and Mastercard reserve the right to block payments entirely to your (or Balanced's) account unless you prove you can get the chargeback rate down. This is called an "excessive chargeback program."
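To put rough numbers on the two costs described above, here is a minimal sketch; the average charge, the per-chargeback fee, and the transaction counts are illustrative assumptions, not Gittip's or Balanced's actual figures.

```python
# Illustrative arithmetic only -- amounts, fees, and counts are assumptions.
# Shows why even small fraudulent charges are costly once chargebacks and
# per-chargeback fees arrive, and when the ~2% network threshold trips.

def chargeback_exposure(fraud_charges, avg_charge=9.41, fee_per_chargeback=15.00):
    """Total loss when every fraudulent charge comes back as a chargeback."""
    reversed_funds = fraud_charges * avg_charge     # the "revenue" is clawed back
    fees = fraud_charges * fee_per_chargeback       # plus a fixed fee per chargeback
    return reversed_funds + fees

def exceeds_network_threshold(chargebacks, total_transactions, threshold=0.02):
    """Rough check against the ~2% rate tied to excessive-chargeback programs."""
    return (chargebacks / total_transactions) > threshold

if __name__ == "__main__":
    print(chargeback_exposure(52))              # 52 bad charges -> ~$1269 of exposure
    print(exceeds_network_threshold(52, 1800))  # True: ~2.9% chargeback rate
```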
In terms of heuristics, fraudsters adapt rapidly to whatever counter-measures you use. The half-life of a good heuristic is maybe a couple of months. The best approach is to evaluate hundreds of different signals, using a machine learning algorithm to constantly adapt to changing fraud patterns. My company is running a private beta of exactly this technology and we're happy to help: http://siftscience.com. Even if you don't use us, I can recommend other services or give you general pointers.
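To illustrate the "many signals plus a learned model" approach described above (this is not Sift Science's actual system; the feature names and training data are invented for illustration), a minimal scikit-learn sketch might look like this:

```python
# Minimal sketch of signal-based fraud scoring with a learned model.
# Features and data are invented purely to show the idea of combining
# many weak signals instead of relying on a single hand-tuned heuristic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-transaction signals: [amount_usd, account_age_days,
# cards_seen_from_ip, country_mismatch (0/1), txns_last_hour]
X_train = np.array([
    [5.00,  400, 1, 0, 1],   # known-good examples
    [25.00, 900, 1, 0, 2],
    [1.00,    0, 7, 1, 9],   # known-fraud examples (card-testing pattern)
    [2.00,    1, 5, 1, 6],
])
y_train = np.array([0, 0, 1, 1])  # 1 = fraud

model = LogisticRegression().fit(X_train, y_train)

new_txn = np.array([[1.00, 0, 6, 1, 8]])
risk = model.predict_proba(new_txn)[0, 1]
print(f"fraud risk: {risk:.2f}")  # route high-risk transactions to manual review
```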
Hope that helps! Let me know if you have any questions: brandon@siftscience.com.
Thanks Brandon! Great info. If you see a way for Sift Science to add value to Gittip then I'm open to a proposal. Balanced won our business by stepping forward and contributing the integration themselves:
Do you have any data on what percentage of fraudulent charges get past your system (i.e., of all fraudulent charges received, how many does your system not catch), and what percentage of your fraudulent-charge alerts are non-fraudulent? Just curious! I'm a co-founder of an eCommerce company and our average transaction is around $5,000, so this is pretty important to us. We already have some pretty strong systems in place, but I'm curious how well this more automatic approach works.
Good question. We measure these using "precision" (of the users that Sift flags, what percentage are actually fraudsters?) and "recall" (what percentage of fraudsters on the site does Sift flag?). We can get 90% precision or 90% recall, although not currently both at the same time, and it's the customer's choice as to which to optimize for. We can just adjust a threshold to tune our system to their needs.
Companies that have high transaction amounts often use the machine learning system to detect likely fraudsters, but then have a human review each one and make the final decision to approve/deny. We have a visualization "widget" that shows the reviewers which signals made a particular user look suspicious. The advantage of using machine learning is then that you: a) catch fraudsters you wouldn't have noticed otherwise, b) don't have to review every single transaction, just the subset that are most suspicious, c) make it faster for your staff to review transactions since the visualization tools will help point them at what to look at.
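For what it's worth, the precision and recall described above are straightforward to compute once you have flagged-vs-confirmed labels; here is a minimal sketch (not Sift's code, and the example numbers are made up):

```python
# Minimal precision/recall sketch for a fraud classifier's flags.
# "flagged" = users the system flagged; "fraudsters" = users later confirmed bad.

def precision_recall(flagged, fraudsters):
    flagged, fraudsters = set(flagged), set(fraudsters)
    true_positives = len(flagged & fraudsters)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(fraudsters) if fraudsters else 0.0
    return precision, recall

# Example: 10 flags, 9 of them real fraudsters, out of 12 fraudsters total.
p, r = precision_recall(flagged=range(10), fraudsters=range(1, 13))
print(p, r)  # 0.9 precision, 0.75 recall
```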
This isn't money laundering (from your initial GitHub ticket it's obvious that's what you were looking for, so that's what you found).
Before selling stolen credit cards, bad guys have to verify them. This is often done with small (<$10) donations to charities or small purchases of intangible goods that are considered low risk merchants.
With Gittip they found a way to get the low dollar amounts to come back to them, but since this wasn't really the goal to start with, you'll likely see donations to random leaderboard members that are unaffiliated with the fraud itself in the future.
I've supported a number of different online credit card donation forms for various charitable and other causes, and you see this behavior of card testing whenever you set the minimum allowed donation too low, and adopt too few of the necessary precautions.
I wrote a post on the approach to raising the bar I took - it really doesn't require much to get the credit card testers to go away, and if you don't get rid of them rapidly, you'll be dealing with chargebacks from here until eternity:
Most small online businesses do not store credit card data locally, but that doesn't stop you from using salted hashes of credit card numbers to compare.
Storing a "salted hash" of a credit card number in the manner you describe is only fractionally better than storing the credit card number itself. This is because credit card numbers have very little entropy - less than 36 bits per issuer code, so bruteforcing these hashes can be done very quickly.
Even then, I'm not sure I would trust this approach. I feel much more comfortable white-listing accounts, and for the time being that's not too onerous.
For privacy reasons, we've been hoping to not track IP addresses
You could hash the IP address, with some suitable salt. Then compare against that.
The purpose of storing IP addresses isn't to find out "the IP address of the user submitting the form", but instead to answer "How many other credit card numbers have come from this address?", something that can be done with sha512("salt_mc_salty_$IP")
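A minimal sketch of that idea follows; the salt value and the in-memory "table" are placeholders, not Gittip's implementation.

```python
# Minimal sketch: store a salted hash of the submitting IP instead of the raw IP,
# then count how many distinct cards have been attempted from that fingerprint.
import hashlib
from collections import defaultdict

SALT = "salt_mc_salty_"                      # placeholder; keep out of the database/repo

def ip_fingerprint(ip: str) -> str:
    return hashlib.sha512((SALT + ip).encode()).hexdigest()

cards_seen_per_ip = defaultdict(set)         # fingerprint -> set of card fingerprints

def record_attempt(ip: str, card_fingerprint: str) -> int:
    fp = ip_fingerprint(ip)
    cards_seen_per_ip[fp].add(card_fingerprint)
    return len(cards_seen_per_ip[fp])        # flag if this count gets suspiciously high

print(record_attempt("203.0.113.7", "card-a"))
print(record_attempt("203.0.113.7", "card-b"))  # 2 distinct cards from one address
```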
The only problem here is, IP addresses are such a small space (4 billion addresses) that it's so easy to brute-force the entire database that I don't see it offering any protection. If the data is stolen it will be cracked in no time, and if the data is subpoenaed that cost will likely be ruled as insufficiently "onerous". Even IPv6 doesn't save you, since the space is sparsely populated.
No, with IP logging it's all-or-nothing. You might as well store them as uint32/uint128.
You're kind of right, but I think you missed a point here.
For the attacker to be able to brute-force, he would need the salt value, so it's important to make sure it isn't stored in plain text. And of course, if Chad wants to, he can easily build a rainbow file, but he can make it easier for himself by simply lying about storing the IPs.
Another problem with this approach is the ability to change the salt: the moment it's changed, all existing data is lost (or meaningless). So to make it secure, the salt would need to be very long, unpredictable, and ideally stored encrypted.
One more thing: if an attacker gets access to the server where the code is running, he can read the salt while it's in memory. So it should only reside in memory while it's actually being used and be destroyed immediately afterwards, which makes it harder for the attacker to get at it (except at the moment it's used).
But come to think of it, if the attacker is that good, I think he would be interested in other things, like things that would get him more money than a list of IP addresses. :-)
It's not as clear cut as that. With a suitable salt and a suitably slow (long-running) hash function, you can delay brute-forcing considerably.
From a security / data privacy angle, things are rarely 100% perfect or 100% broken. Just because an approach is not 100% perfect, doesn't mean that it is worthless. It can still offer protection of sensitive data.
Storing IPs in the clear in a DB means that if anyone gets any access to it (e.g. SQL injection type attack), they can have the whole lot. With salted IPs it's harder and much longer before they have any decent data.
If you tweaked a hashing algorithm to take circa 100 milliseconds to hash an IP, then "brute forcing" would be much less of a problem because it would take about 13 years to hash the whole lot.
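A hedged sketch of that trade-off: tune a deliberately slow KDF so one lookup is cheap for the application but exhausting the IPv4 space is not. The iteration count below is a guess; calibrate it on your own hardware until a single hash takes on the order of 100 ms.

```python
# Sketch of the "slow hash" argument for IPs: one hash is cheap for us,
# but 2^32 of them are not. Iteration count is a rough guess -- calibrate it.
import hashlib
import time

SALT = b"salt_mc_salty_"
ITERATIONS = 500_000          # tune until one call takes roughly 100 ms

def slow_ip_hash(ip: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", ip.encode(), SALT, ITERATIONS)

start = time.time()
slow_ip_hash("203.0.113.7")
per_hash = time.time() - start

ipv4_space = 2 ** 32
years = ipv4_space * per_hash / (3600 * 24 * 365)
print(f"{per_hash*1000:.0f} ms per hash -> ~{years:.1f} years to brute-force all of IPv4")
```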
>If you tweaked a hashing algorithm to take circa 100 milliseconds to hash an IP, then "brute forcing" would be much less of a problem because it would take about 13 years to hash the whole lot.
Or $31,000 on EC2. Are these logs per-request or per-transaction? The former could get awfully expensive.
Of course, checking a single target IP address would be trivial. Whether that matters depends on their threat model.
It seems most of the backlash around that was due to you suggesting you publish IP addresses. Since you are already taking credit card details I don't think people would object to recording IP addresses to prevent credit card fraud (maybe only record IP addresses for credit card transactions not general usage of the site). If someone wants to anonymously contribute they already have to get an anonymous credit card, getting around an IP address block would seem trivial.
I would also advise against your plan to just whitelist givers. There are already too many barriers to contributing. I would suggest just charging, and holding onto the money when the transactions seem dubious. Do you also get charged the cost of fraud? Because if you don't, I would just charge the credit cards, forget about it, and let your provider do their job.
Are the fraudsters using the charity website itself for confirmation that the card works?
Could you run them off by just always displaying success for any sane-looking small value donation, without leaking the result from your payment processor?
I've had the exact same experience with donation forms for multiple NGOs last week. We're probably going to implement some kind of IP limiting (similar to your solution) and additional plausibility checks (since it's only used by national NGOs, that's not too much of a hassle).
Agreed. When I read this it looks like a verification process for a batch of stolen credit cards to see which are still valid and which aren't. Note that the "successful" transactions may very well be charged back to you at $15 each or more so you want to refund any suspicious payments quickly to avoid going into the negative.
You basically have to catch it before the charge is even settled; either within the same business day before your settlement cutoff, or by authorizing without capturing for a day or two so you have time to review. Once the charge is settled, a refund almost never stops someone from charging back the payment -- for whatever reason, when a card is reported stolen, people and banks simply charge back everything unauthorized even if a quick review of the account would show some of the payments were already refunded. You can dispute these chargebacks by providing proof of the previous refund, but you're still out the chargeback fees, and these chargebacks still count against your account -- so they can end up getting you terminated if your CB rate is pushed too high.
We get these; we've seen a case where we refunded, but apparently they were going after the currency conversion amount with the chargeback. Do some providers offer longer settlement windows? It feels like we often only get a few hours to void a transaction.
You have 1-3 days to capture an amount you previously authorized. You only have so little time because you're authorizing and capturing at the same time. If you need more time to review, decouple them. Some payment gateways also let you decide the settlement cutoff time, so you can set it after business hours.
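A sketch of what decoupling looks like in practice; `gateway` and its method names here are hypothetical stand-ins for whatever payment API you use, not Balanced's actual client library.

```python
# Sketch of decoupling authorization from capture so there's time to review.
# `gateway`, `authorize`, and `capture` are hypothetical stand-ins for your
# payment API -- not Balanced's actual client.

def handle_payment(gateway, card_token, amount_cents, looks_suspicious):
    # Step 1: authorize only -- places a hold on the card, moves no money yet.
    auth = gateway.authorize(card_token, amount_cents)

    if looks_suspicious(auth):
        # Step 2a: leave it uncaptured; a human reviews within the 1-3 day window.
        queue_for_review(auth)
        return "held for review"

    # Step 2b: capture immediately for transactions that pass the checks.
    gateway.capture(auth.id, amount_cents)
    return "captured"

def queue_for_review(auth):
    print(f"review needed for authorization {auth.id}")
```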
It's not even about stealing money per se, it's a step in the process of credit card theft. Fraudsters often do the same thing on Amazon and iTunes - they'll make 1 dollar purchases that allow them to 'verify' the cards. In these cases the 1 dollar purchases aren't for any direct material gain.
Sort of. Theft isn't the primary motive, so I think it's inaccurate. A lot of what the fraudsters are (or will be) doing is transferring money from victims to innocent people. The real-life analogy I would use is going to a store and putting something in someone's bag without them noticing to test the store's security. That is arguably not theft, even though it's illegal.
I'd say the proper term is fraud. But I don't really like semantic arguments, so I'd say it's not a huge deal either way :)
edit: I should add, the reason I don't think it's stealing is because the money often gets returned; the illicit transfers can be reversed. When the real stealing will be going on the fraudsters will be taking lots of money and running.
edit2: I feel bad for even objecting, it's really not a big deal. 'Stealing money' is close enough to what's going on.
That is common law theft. Property taken, no consent, deprives legitimate owner of use of it. The thief gaining value from the property is not an element of the crime.
I think it's a valuable distinction you're making, and I don't think it's merely a matter of semantics. Knowing the motivations of the criminals involved is useful in building workable defences - the fact that they don't care if they get the money or not is important information, and not immediately obvious. I for one have learned something from this thread, anyway.
Agreed. When this kind of fraud started the anti-fraud industry was a bit unprepared because low amount transactions were historically not dangerous. Understanding why it was happening helped quite a bit.
Fair enough. "Some" money coming into Gittip from stolen cards is in fact going out into bank accounts, some of them belonging to "innocent people" (myself included)--but some of them not. Theft, strictly defined, is in fact taking place.
> Before selling stolen credit cards, bad guys have to verify them. This is often done with small (<$10) donations to charities or small purchases of intangible goods that are considered low risk merchants.
They also verified them on SoundCloud without any purchase. Don't know if it's still possible.
If it's the former, until you make a transaction there's no way to verify in advance whether or not a card has a still valid account attached to the backside of it.
I know you're saying "without any purchase", but maybe it was just for a vanishingly small amount.
> If it's the former, until you make a transaction there's no way to verify in advance whether or not a card has a still valid account attached to the backside of it.
It's possible, with most credit card processors, to perform a $0 authorization to confirm that a credit card number is linked to an active account. It doesn't guarantee that any charges against the card will go through -- the card may be at its limit, for instance -- but it will correctly reject numbers that are structurally valid (e.g., pass Luhn) but which don't correspond to any account.
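For reference, the Luhn check mentioned above is just a digit checksum: a structurally valid number passes it without telling you anything about whether a live account exists behind it. A minimal sketch of the standard algorithm:

```python
# Standard Luhn checksum: catches typos and structurally invalid numbers,
# but says nothing about whether the card is linked to a live account.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4242424242424242"))  # True  (a well-known test number)
print(luhn_valid("4242424242424241"))  # False (fails the checksum)
```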
It's just that they offer a free trial without charging even $0.01; if it passes, they cancel the account and the owner never knows the card was tried out. This method can work on any "free trial, CC required" offer, as long as it doesn't charge any test amount.
Pull this post and talk to lawyers if you haven't already.
Depending on where you're based you'll have legal obligations that'll define what you should be doing at this point. This may well involve lawyers, your regulators and the police.
Some countries make it a criminal offence if you let a criminal know that you suspect them of money laundering or similar offences (this is known as "tipping off") so you should be very very careful about what you're disclosing both to your users and the general public.
Yeah, sorry, I wasn't clear: I'm pursuing openness because I believe in openness, not because I'm based in the US. I am glad to learn about this potential legal ramification, however.
If you tell people they're doing something illegal, they might stop before they're caught and sentenced to life in prison. And if we don't fill up the for-profit prisons, the lobbyist overlords won't be pleased.
Looks like you have just discovered chargebacks, something that just about every merchant discovers at some point.
What to do? Some options to reduce your fraud are
- Outsource the problem by using an indemnified payments system (a payment processor that does its own fraud checks and doesn't pass any chargebacks on to you). Pros: easy. Cons: expensive, and lots of valid payments will be refused.
- Use an e-wallet that usually has few or no chargebacks, e.g. Skrill or Neteller. Pros: easy, not too expensive. Cons: more difficult for people to make payments, as they need to create an account with the e-wallet first.
- Use services to help with your fraud detection, e.g. Iovation. Pros: you can keep it easy for your customers to make payments. Cons: a lot of work to implement (relatively speaking).
- Use Bitcoin, e.g. bitcoin247.com. Pros: no chargebacks ever. Cons: about 0.00001% of your customers use Bitcoin.
Edit: I forgot to add:
- Require 3D Secure / Verified by Visa payments. This removes the chargeback liability from the merchant in most cases and shifts it to the card owner's bank. Pros: far fewer chargebacks, and customers can still pay directly on your site with their card (apart from the 3DS redirect). Cons: entering 3DS details is another barrier to paying, so it will reduce payments; plus I'm not sure of the penetration of 3DS-enabled cards in the US.
Gittip's professed concern is with ethics (and possibly sustainability), not losing money from chargebacks. The author realizes he has stolen money in his bank account, and that bothers him.
I'm pretty sure their concern is also avoiding the dirty legwork related to these cases. The time spent dealing with these problems is time away from productive development.
This is unfortunate, but quite common. If you accept credit cards online, you're at risk. The specific kind of fraudulent behavior you see will depend on several factors (the nature of your business; whether you enable transfer from users to just yourself, or whether you push money from one user to another.)
Credit card companies will, some time later, probably notice the fraud. At that point, you'll get a chargeback: you'll have to pay back the money you charged in addition to a fixed penalty per fraudulent charge (usually $15.) Especially if you're enabling a marketplace, like gittip does, these fees can be devastating. Regardless, if chargebacks become too common, your merchant account may be suspended.
I've written some about my company's experiences with fraud, if it's of interest:
Openness about the problem is good, but I am not sure that it helps to provide that much detail about the ways you detected the fraud. That just gives the attacker more information about how to circumvent your detection.
My favorite part was the 3 primary & 3 secondary factors they looked at to identify fraud. If the people doing this can tweak those factors, it will help them out.
There were two primary factors, not three. And the point is that I'm planning to individually review and whitelist all new accounts. I'm able to adapt my heuristic in real time. ;^)
Information asymmetry is probably your only advantage against credit card fraudsters, because there is no security hole; rather, they are exploiting your core business flow.
I want to explore openness wrt fraud prevention, not out of a facile rejection of "security through obscurity," but as part of Gittip's identity as an open company. It's accepted doctrine that "information asymmetry is probably your only advantage." I'm asking: can we be open about fraud prevention and prevent fraud? If we can be, we should.
What are your thoughts on the value of the social graph in spotting suspicious accounts? It seems to me that we should be able to whitelist new accounts based on a review of GitHub or Twitter profiles, and perhaps for flagged accounts we "authorize without capturing," as dangrossman suggests above.
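Purely as a hypothetical illustration of that review-and-whitelist idea (the thresholds and profile fields below are invented, not a vetted policy), the triage step might look something like this:

```python
# Hypothetical whitelisting heuristic based on public profile signals.
# Thresholds and fields are invented for illustration; real review should
# stay human-in-the-loop, with this only ordering the review queue.
from dataclasses import dataclass

@dataclass
class Profile:
    account_age_days: int
    followers: int
    public_repos: int

def whitelist_score(p: Profile) -> int:
    score = 0
    score += 2 if p.account_age_days > 365 else 0   # long-lived account
    score += 1 if p.followers >= 10 else 0          # some social graph
    score += 1 if p.public_repos >= 3 else 0        # visible activity
    return score

def triage(p: Profile) -> str:
    if whitelist_score(p) >= 3:
        return "auto-whitelist"
    return "hold for manual review (authorize without capturing)"

print(triage(Profile(account_age_days=1200, followers=40, public_repos=12)))
print(triage(Profile(account_age_days=2, followers=0, public_repos=0)))
```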
I admire your motives, but I can't offer much encouragement.
My experience is that there is no such thing as preventing fraud in the absolute sense. It's not a binary proposition—maybe general security isn't either, but it's a hell of a lot less gray than credit card fraud. So while I think it's good for general fraud prevention techniques and information to be widely disseminated, I can't in good conscience discuss specifics of techniques that I've employed because those would be easily traceable to companies I've worked for, and thus would impose an undue cost on them. A lot of people who have worked on these issues are probably in a similar position, where we'd be happy to go into details over a beer but not on the public record.
It may be the only thing I can do to help mitigate credit card fraud, but it is not the solution to preventing it. Credit card companies should be operating on the theory that people freely throw around their credit card number (because, let's face it, a lot of people do) and come up with solutions that do not rely on hiding that information.
Obscurity of process/information can definitely be a benefit to the security of a system but it should not be the solution. The system should be designed for the absolute worst case scenario where this process/information could be exposed.
I realize this can only go so far until at some point there is going to be some sort of secret that needs to be kept (i.e. physical hardware key, encryption codes, etc) where if this is cracked your system is exposed and at that point you need to have some sort of plan B to regain control and minimize damage.
This is gobbledygook. You are still thinking in terms of security, but with credit card fraud there is no security issue or system to be cracked per se. Rather it's an identity and information problem. You can impose additional steps to vet the legitimacy of a transaction, but there is nothing that can ever give you a 100% guarantee. So you have to balance your efforts at vetting the transaction against usability barriers that add friction to your core business function.
Publishing your techniques (even with specific variables hidden) is often the difference between attackers being able to iteratively determine the minimum workaround to get the desired results and having to dedicate orders of magnitude more effort to ensure they are flying under the radar.
You're correct. I was not thinking about conning the system but actually breaking into it which is obviously a lot harder. It is a lot easier to look at ways to dupe the algorithms for determining fraud rather than breaking the system and bypassing them. Thanks for opening my eyes.
Obscurity is necessary for fraud prevention. If perpetrators know exactly what behavior gets flagged as fraud, it's easier for them to figure out ways to avoid it. If they don't know, perpetrating fraud is harder.
This is true if a system can freely be reverse-engineered by the attacker. If not, the obscurity provides an added cost to the attacker. Obscurity can actually be one of the stronger weapons in an anti-fraud solution (I used to work in anti-fraud).
Yeah like scott_s pointed out I did not consider anti-fraud. In my quick glib reaction comment I did not consider all scenarios. Thanks for pointing this out.
The maxim is that you cannot achieve security through obscurity, where security offers some guarantees (mathematical, physical, etc.). Obscurity, while offering little to no guarantees, can certainly be a quite useful part of your protection plan.
You're 100% correct Rick. I should have been more explicit in my original post. It was a quick glib comment that I should have put a bit more thought into.
There is a reason people don't go around wearing their credit card numbers and SIN on their clothes. The system to protect them should be set up to work at its best in the scenario where this information is completely exposed, but our attempts to hide this information do add a tangible benefit.
True, it is the wrong way. Ideally there would be a better system for this kind of thing. It is the reason why physical authenticators and two-factor authentication are becoming more popular.
I'm impressed at how quickly the criminal underground pivots. To identify Gittip as a potential money laundering scheme while it is relatively unknown even within tech circles is, in a slightly disgusting way, actually quite impressive.
It does make me wonder, did the bad agent happen across Gittip independently or are they active within Tech communities?
Anyone that knows about these things would see this immediately. Anything that involves transferring money from one agent to another is quickly pounced upon. That HN is a popular place for launching new startups would make this an obvious target to watch. And most people starting new money transfer systems are ignorant of the potential for fraud and laundering.
For the record, I've been waiting and watching for this to happen. It did happen sooner than I expected, however. Not sure if that means Gittip has grown faster than I expected, or our Jokers are earlier to adopt than I expected. ;^)
Makes me think that it's probably some blackhat with a few stolen credit cards, looking for a way to extract some value out of them (other than using them to pay for hosting). This isn't necessarily a large-scale operation.
I am shocked, shocked to discover criminals on boards dedicated to hacking.
Just kidding, but it is funny how outsiders might not understand why you are surprised to find criminals in the "hacking" world.
Even so, a significant proportion of the people who go to something like DefCon have done some low-level fraud with credit cards, and some have done much more than that.
Any good payment gateway should be managing the risk of stolen credit cards, but it's likely that because Gittip works with small recurrent payments instead of big upfront payments, it doesn't trigger any red alerts.
This is also why most people don't realize that PayPal was the side effect of a company that was originally created to handle fraud. Source: http://www.amazon.com/gp/product/1430210788/
To take this to the next step, this is also why I believe Paypal is one of the very few companies that has been able to scale online payments. I'd love to see anyone challenge their ability to balance customer service with fraud prevention at scale.
Starting a new payment service, even from the point of view of a company specializing in fraud prevention, is a lot harder now than it was in the past. You're basically entering an arms race that has been going on for a decade-plus as a rookie or, at best, a semi-adept. Likely your main contribution to the field before folding is target practice.
Gittip should work with a party that is already in the possession of the required knowledge or they'll be shutting down. This post raised their visibility as rookies considerably and you can expect the sharks to move in now that there is blood in the water.
I am a strong supporter of Gittip. I think it is an important funding model to make available, across a variety of disciplines. I hope there are some people around with experience identifying money laundering patterns, who can keep Chad from having to reinvent the wheel on this.
For what it's worth, a little bit of fraud is a good thing. It means people are using your system and it's growing. Too much fraud and people will lose confidence and your payment processors will punish you. Too little fraud and your system is probably too complicated to be useful to anyone, including fraudsters.
It's impossible to tell, with certainty, if a credit card is being used by its rightful owner or someone else. That's not something anyone anywhere in the payment processing industry guarantees. In terms of who can do the best at predicting the likelihood a transaction is fraudulent or not, it's definitely the merchant/website, not their processor. He has much more information available to him (IP address, github account, etc) than Balanced has.
There's a field (meta) on the Debit resource that allows a marketplace to pass fraud signals like IP address and shipping address. Gittip is facing this issue much earlier, relative to its volume, than other marketplaces. Otherwise, we don't ask a marketplace to pass in more information until it has grown more. The problem with being restrictive too early is that you can hinder a marketplace from growing through false positives.
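As an illustration of the kind of payload a marketplace might attach via that meta field (the key names inside `meta` and the request shape below are assumptions for illustration; consult Balanced's docs for the actual client and API usage):

```python
# Illustration of attaching fraud signals via a debit's `meta` field.
# The keys inside `meta`, the amount, and the payload shape are assumptions
# made for illustration -- not Balanced's documented schema.
debit_payload = {
    "amount": 941,                        # cents
    "meta": {
        "ip_address": "203.0.113.7",      # signal: where the card was submitted from
        "shipping_address": "n/a (digital goods)",
        "github_username": "example-user",
        "account_age_days": 3,
    },
}

# e.g. send this payload with your Balanced client of choice when creating the
# debit; only the contents of `meta` are the point of this sketch.
print(debit_payload)
```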
In the end, I think we (Balanced) should have done a better job here, and we'll work hard to do so in the future.
My GF just found out a few hours ago that she was the victim of a similar scheme. Someone used her Amazon account (which has her credit card info) to donate to a Kickstarter account. Unfortunately, she has no way of finding out which Kickstarter account. Luckily, her credit card company took care of everything without a hassle. She also spoke with Amazon customer service, and they "were completely useless and almost hung up because they didn't know what Kickstarter was."
The problem with hiding heuristics is that false positives get squished. I want to avoid the horror stories we hear about people getting their Google account shut off or their PayPal funds withheld.
Another alternative would be freezing money transfers for 30 days or so. Or use a payment processor that is used to deal with high risk websites (for example, CCBill).
> The uncomfortable truth is that Gittip, Balanced, and our legitimate users are financially incentivized to turn a blind eye to laundering, because we have benefitted and are benefitting from it.
That's only true until you start getting chargebacks.
> That's only true until you start getting chargebacks.
There are compliance ramifications of permitting money laundering, but collusion isn't always money laundering. Here's a few different scenarios:
1. Actual money laundering, where someone is trying to obfuscate the origin of the money for some illicit reason. The ramification of permitting it, or of not having strong enough systems to prevent it, is being shut down. That's a bigger incentive than financial loss.
2. Fraud, where someone is trying to get cash off of someone else's card. This is the number one form of fraud on a marketplace and, by far, the hardest to catch. This is where the incentive is financial, due to chargebacks.
3. Cash advance, where a marketplace has set their fees low (sometimes even lower than the fees Balanced charges) and someone is incentivized to get money off their card or simply collect miles/points. Venmo and a lot of similar services got targeted by this form of collusion when they didn't charge any fees. This should be prevented due to card network (Amex, Visa, MC, Discover) policies, but it generally won't result in a chargeback.
This is why banks frown upon offering CC merchants to "marketplaces" - anyone who is not charging cards for their own business, but allows one user to give money to another.
You didn't get money laundering, but if your volumes were larger, you would get money launderers too.
It's both. Gittip is 'laundering' the money, so that it's clean on the other side. It's not the greatest money laundering scheme as the launderer is unwitting, and therefore can 'flip' exposing the source of the ill-gotten gains.
Negative. The layering stage of ML does indeed lead to "cleaning" funds, but Gittip isn't doing that. ML means giving money a clean slate with virtually no history. Everything from casinos and offshore entities to wiring funds through FATF-blacklisted countries is closer to what ML actually is.
Any system handling funds should be approached from the angle of minimizing the potential for fraud. If you don't do that right from day one, there will be a lot of hard lessons, which are more than likely to kill your company. Please team up with a company that has the experience to deal with this. Balanced (which should have been your first gatekeeper here) dropped the ball in a terrible way; their anti-fraud measures should definitely have tripped over this, so clearly they're not in control of the situation. From your post and the comments here it is clear that you have the right general idea, but you lack the relevant experience and tools.