> “We reported this in February 2019 to PayPal via HackerOne,” they say. “After an initial rejection and several discussions, PayPal paid a bug bounty of $4,400.” The pair have not heard from PayPal, they say, since April 2019. But this week they “tried and could still use the virtual credit card for online payments.” That means, they told me, “the bug has not been fixed.”
> But in terms of the Fenske and Mayer disclosure, the researchers told me that this is not fixed, even after PayPal’s “mitigation” statement.
If PayPal has known about it for a year and it still isn't fixed, then either (1) PayPal didn't understand the bug report and "fixed" something else, or (2) PayPal understood the bug report, didn't fix it, and is trying to save face. Either one of those sounds pretty bad for their security policy...
As for PayPal's security policy, note that they have a maximum password length of 24 characters, and routinely send people e-mails with a big 'log-in' link.
These are both bad practice. The password length limit reduces the quality of passwords, and suggests plain-text storage of passwords. The sending of log-in links makes people much easier to phish, since people are used to clicking on a link in e-mail and then entering their credentials on the site.
Recently, I received an e-mail which looked a lot like a phishing attempt. It contained a link to sign in to "paypal.com", but hovering over the link revealed it to be something like "https://epl.paypal-communication.com/T/ve3648d90e0f976ec10e4...". It's a really stupid idea to make users believe that "https://random.paypal-suffix.com" might be legit. I wonder why domains like "paypal-comunication.com" are not registered for nefarious purposes yet.
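Hover-checking is about the hostname, not the look of the link. A quick sketch (URL path truncated, as in the mail) of how you'd verify where that link actually points:

```python
from urllib.parse import urlsplit

# The link's host sits under paypal-communication.com, not paypal.com.
url = "https://epl.paypal-communication.com/T/..."  # path truncated as above
host = urlsplit(url).hostname
print(host)  # epl.paypal-communication.com
# A legit PayPal link would be paypal.com or a subdomain of it:
print(host == "paypal.com" or host.endswith(".paypal.com"))  # False
```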
Important to note that this is a department that manages tens to hundreds of thousands in loans per user, asked users to recreate an account multiple times, on a variety of domains, by providing critical personal info (including SIN), and sent threatening notices demanding payment for nebulous charges that later resolved themselves.
Sure, but "some size limit" can be as big as 500 or 1000 bytes. "Choking up the connection" won't be for very long at all. Even on a 2400 baud modem, you're talking only a few seconds to transmit a kilobyte of password. On modern cell networks or especially broadband connections, you pretty much won't choke the connection at all with a 1000-character password.
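Back-of-the-envelope for the claim above (assuming roughly 10 bits per byte of serial framing on a modem link, i.e. 8N1: start bit + 8 data bits + stop bit):

```python
def transfer_seconds(num_bytes, bits_per_second, bits_per_byte=10):
    # bits_per_byte=10 approximates 8N1 serial framing
    return num_bytes * bits_per_byte / bits_per_second

print(round(transfer_seconds(1000, 2400), 1))  # ~4.2 s on a 2400 baud modem
print(transfer_seconds(1000, 1_000_000))       # 0.01 s on a slow 1 Mbit/s link
```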
To add to that, the login page (which contains nothing but two fields for email and password) weighs in at 1.5 MB, so bandwidth clearly is not an issue for PayPal.
I agree, which is why I was only criticizing the argument against "there should not be a limit at all" -- I don't know what you're disagreeing with in what I said.
One reason why one might have a password length limit (especially such a low one) is because it is stored in a fixed size 'string' field in a database.
This used to be a common configuration of authentication mechanisms where the password was stored plaintext. Now, I don't think paypal is _actually_ storing passwords in plaintext.
However, it is the first reason that comes to mind when this kind of limitation exists.
My most generous explanation of this scheme is that, at some point, PayPal used plaintext storage as the backend for authentication. When they moved to a better scheme for the backend, they never updated the length limit. That limit of 24 then slowly invaded all the front-end auth code, and changing it is now seen as too big an issue. I'd expect reasons like 'longer passwords are harder to remember' and 'network performance' are probably used internally as rationalizations for why no one starts trying to fix this.
Alternatively, they could genuinely believe that 'long passwords are harder to remember' or 'long passwords would induce a lot of performance overhead'. However, as far as I know, there are no reasons that fall anywhere near 'best practice' that would support a password length limit of 24.
No, because the length of that field is static. Hashing algorithms (at least the ones you should use) produce a fixed-length hash no matter the size of the password.
The only time length matters is if the hashing algorithm has a limit (which is usually pretty high), or if you're storing these values encrypted/plain text.
Hence, the presence of a password limit suggests the use of a fixed-length field in a DB that stores values in plain text.
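On the fixed-length point: a hash function's output size is independent of its input size, which is why a hashed-password column never forces a length cap. A quick stdlib illustration (SHA-256 as a stand-in here; a real system should use a slow KDF like bcrypt, scrypt, or Argon2, which also produce fixed-size outputs):

```python
import hashlib

# Whatever the password length, the stored digest is the same size,
# so the DB column never constrains how long a password may be.
for pw in [b"short", b"x" * 24, b"y" * 1000]:
    digest = hashlib.sha256(pw).hexdigest()
    print(len(pw), len(digest))  # digest is always 64 hex characters
```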
Passwords stored encrypted with a fixed-length ciphertext would make 24 a weird number; that would suggest something like AES-192, without storing the IV in the same field. Moreover, encrypted passwords are worse than hashed passwords, because you then need to secure the encryption key, whereas with a salted hash, brute force is the only way to recover the plaintext passwords.
Or because you don't want to risk users forgetting very long passwords. (Everyone should be using a password manager, but sadly that isn't the case.) That's generally the reason that limit is set.
That certainly flies in the face of the most recent recommendations. Recent NIST guidelines say not to do this, and NIST isn't exactly known for being adventurous.
The current best practice is for people to use passphrases rather than complicated passwords. For this, a password length limit of 24 is simply not enough.
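To put a number on that: even the classic four-word xkcd-style passphrase already blows past the cap.

```python
# The well-known "correct horse battery staple" example vs. a 24-char limit.
passphrase = " ".join(["correct", "horse", "battery", "staple"])
print(passphrase, len(passphrase))  # 28 characters, already over the cap
```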
I'm no advocate of it; just explaining what's very likely their rationale, as opposed to them storing passwords in plaintext. Plaintext password storage still happens, but you wouldn't see that at a company like Paypal. (Plaintext credentials may be unintentionally present elsewhere, like debug logs, but a company like that isn't going to be so incompetent as to store them in plaintext in the database.)
Can you show me a larger payment-focused company that's been revealed to be storing passwords in plaintext within the past 5 years? Not saying there's not a ton of incompetence, but that's a very specific level of incompetence.
I'm just saying a company like Paypal which is both massive and dealing with something extremely sensitive (money) isn't going to make such a ridiculous mistake. I'm definitely no fan of theirs; I'm just saying it's silly for the initial poster to speculate the password length restriction is there because they're storing them in plaintext.
Plaintext password storage definitely still is common, unfortunately, but you're not going to see it at a place like Paypal or Bank of America or Stripe in 2020. Or even 2010.
I work in the infosec industry, so I've definitely seen some crazy and incompetent shit. But I still don't think Paypal or Stripe would store plaintext credentials.
I’ve been telling friends and family, for at least 10 years, maybe more, the only safe way to go to PayPal, is to type the address into the browser yourself, starting with HTTPS.
While that includes Chrome, Safari, Firefox, IE, and Edge, that's not every browser, and it's a really bad habit to get into. Especially as the preload lists are yet another attack surface.
PayPal needs to seriously reevaluate how they want to approach vulnerability reports. Why have a bounty program if you are going to act hostile towards the white-hat community or even ignore their reports?
The two stories are unrelated, though the reporter cites the former. From the vulnerabilities disclosed in that report, it seems pretty unlikely that yesterday's stories caused a rash of thefts; they were all pretty low-severity.
Note that here, Paypal paid a substantial bounty a year ago.
> “We reported this in February 2019 to PayPal via HackerOne,” they say. “After an initial rejection and several discussions, PayPal paid a bug bounty of $4,400.” The pair have not heard from PayPal, they say, since April 2019. But this week they “tried and could still use the virtual credit card for online payments.” That means, they told me, “the bug has not been fixed.”
To reiterate the OP, what is the point of a bug bounty program that ignores or fails to address reported issues?
> The two stories are unrelated
They are related in the sense that both stories show a failure to respond to reported issues.
> Note that here, Paypal paid a substantial bounty a year ago.
They paid but didn’t fix the issue? That is not taking account security seriously at all.
At best, PayPal has a critical flaw in their bug bounty program.
> This seems curiously confident given that just about every single one of your many comments on the other story was inaccurate or a misinterpretation.
So you don’t think that sending a bug bounty reward a year ago to a security researcher who exposed a flaw, that is still being exploited to take money from people, is a critical flaw in the program?
Do I think it's possible PayPal had an incomplete fix or had a regression or an organizational screwup of some kind? Absolutely, that is possible.
Do I think your 'motivated googling' approach to analyzing either story is likely to produce worthwhile insight? Not really. We've already seen it be remarkably inaccurate.
> Note that here, Paypal paid a substantial bounty a year ago.
$4,400 is not a substantial bounty for a bug of this severity, compared to its value on the black market. The article even cites attackers extracting over $1,000 from some individuals; multiply that by thousands of users.
The black market value of this exploit is certainly into the hundreds of thousands.
You'll excuse me if I don't take this very seriously given that on previous threads people have --- not making this up --- made cases for logout CSRFs having high value on the black market. After all: the competitors to the vulnerable service could have used logout CSRFs to drive customers away!
Vulnerabilities are worth money in markets when they fit into pre-existing business/operational models. That's why clientside RCE in popular clients is valuable: multiple competing buyers have whole operational frameworks where new RCEs are drop-in compatible. Nobody speculatively builds new business models around the prospect of a random serverside vulnerability.
Maybe a vulnerability like this has value --- we don't know what it is, or how much interaction is required, or how quickly it could have been killed --- if it directly produces cash every time it's applied. But that's still a maybe for a serverside vulnerability with a half-life of epsilon.
Meanwhile: $4400 is a strong bounty for a serverside logic vulnerability. Serious vulnerabilities like stored XSS have bounty values in the hundreds of dollars despite the fact that people on HN seem to think they're worth 6 figures on some hypothetical black market.
I have no opinion about how Paypal handled this vulnerability after paying up for it; I'm exclusively interested in how this story intersects with yesterday's Paypal thread, which was a shitshow.
The business model already exists: order videogame consoles or luxury purses or some other high-value good that is easily fungible with cash (Craigslist) and then either porch pirate the package when it arrives, or if you're brazen enough just order it right to yourself.
This is something that is routinely done already with credit card fraud. This is how thieves "cash out" the cards they steal with skimmers at gas pumps and stores.
The angle here is that they no longer need to steal your digits with a skimmer, they can just use your contactless payment wallet, because Paypal didn't lock them down properly. Or so the researchers allege (they don't give the exact details of the vuln for obvious reasons).
That's like saying the existing business model is "crime". What I'm talking about is all the support code and processes that go into operationalizing a vulnerability. Again: it depends on what the vulnerability is, and we don't know, but if it's an elaborate serverside vulnerability that requires user interaction specific to this vulnerability, I'm comfortable out on the limb that says nobody is bidding against anyone to buy this on any black market.
You're also only responding to a fraction of my argument. Even for clientside RCE, alternate market buyers don't pay full freight in a lump sum: they tranche payments because they know vendors will eventually kill the vulnerability and nobody is sure how long that will take. Here, you have a vulnerability where you'd more or less have to get paid royalties from direct fraud, because, again, Paypal can presumably kill the bug instantly.
If you have more specific details on the vulnerability that will enable you to make a clearer case for how this could drop into a system of repeated profitable attacks, provide them. I'm interested in hearing them.
Otherwise, my default response to people saying "this bounty is too cheap because the black market would pay 10x for it" is "yeah, sure, and for logout CSRFs too".
> PayPal told me that “the security of customer accounts is a top priority for the company.”
I wish journalists would ridicule this corporate bullshit lingo instead of just relaying it. I'm fairly certain that anyone that ever had contact with PayPal's (or Amazon's, or probably any other large corporation's) customer service with issues regarding security can attest that it's absolutely not one of their top priorities.
They haven't even bothered to make their official emails not look like phishing attempts. They don't care about security.
It's called the "right of reply". It's courtesy to reach out to a company and include their response without critique about whether it's "corporate bullshit".
I agree that reaching out to whomever's being criticised is a courtesy and, sometimes, even legally required. But I don't think it's right to not critique. When a company blatantly uses doublespeak, that should absolutely be critiqued.
It's not the journalist's duty to ridicule anyone. Jon Stewart's "The Daily Show" was very entertaining, and even somewhat informative, but I think he would be one of the first to say that it wasn't journalism. I don't like corporate bullshit lingo either but when that's what they say as their formal statement to the press then relaying it is the journalistic duty.
It's one thing if the company is outright lying, but PR blather doesn't really count (for all that it's pointless and annoying.) In any event, yeah, if they want to editorialize, that's fine but it's done in a separate "Editorial" section. ( https://en.wikipedia.org/wiki/Editorial )
You wouldn't need to take Jon Stewart's approach though. I'd be fine if the ridiculing comes from another party that the journalist gives room to. Ask a consumer advocate what they think of PayPal's statement, or ask an infosec professional.
Letting the PR speech stand by itself without large red arrows pointing at the absurdity gives it way too much power imho.
> The right of reply or right of correction generally means the right to defend oneself against public criticism in the same venue where it was published. In some countries, such as Brazil, it is a legal or even constitutional right. In other countries, it is not a legal right as such, but a right which certain media outlets and publications choose to grant to people who have been severely criticised by them, as a matter of editorial policy.
You might not like it but it is a thing.
And if they choose to defend themselves with a load of PR doublespeak BS, well, that's also news, eh?
If the reporter adds "...which is obviously bullshit." at the end that's the difference between journalism and a SNL sketch. ( https://www.youtube.com/watch?v=i9qblOghuKk )
> They haven't even bothered to make their official emails not look like phishing attempts. They don't care about security.
Man, now that you mention it, I've thought the same thing every time I see an email from PayPal and just disregarded the thought because it's so common from websites and nobody cares. It's _literally_ the business model of some of the top email marketing companies.
Either way, I don't think it was mentioned in the six vulns disclosed yesterday or in today's.
Regarding security at PayPal: not long ago I got a PayPal donation to an email address of mine of the form {@example.com. This email was not attached to my PayPal account, so I tried to add it to claim the payment, but client-side validation rejected it because of the funky { alias.
I disabled the client-side check using the browser's developer tools, and my email was accepted by the server upon submission, so I could finally claim my 5 euros :P.
All of this was preceded by me contacting support about adding my email address. They couldn't help me and told me to contact the sender, which would have been impossible, since it was a donation, and the only thing I had was a PayPal notification about a pending payment to that email address.
Of course the server should have accepted the email anyway, because it was valid. The issue just highlights a faulty development process at PayPal that allows server-side validation to be more permissive than client-side validation.
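The mismatch described above can be sketched like this (both regexes are hypothetical illustrations, not PayPal's actual checks): RFC 5321's "atext" allows characters like { in the local part, but naive client-side patterns often exclude them.

```python
import re

# A typical naive client-side pattern that forgets most atext specials:
naive_client = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

# Closer to RFC 5321 atext (simplified: dot-atoms only, no quoted strings):
atext = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~.-]"
permissive_server = re.compile(rf"^{atext}+@[A-Za-z0-9.-]+\.[A-Za-z]{{2,}}$")

addr = "{user@example.com"  # valid per RFC 5321, rejected by the naive check
print(bool(naive_client.match(addr)))       # False
print(bool(permissive_server.match(addr)))  # True
```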
As a power user of Google Pay in conjunction with PayPal (in Germany), should I be worried now and, as recommended, remove my PayPal account from Google Pay? A lot of people around me use it the same way I do, and no one has heard of any such incident yet. Well, now that I've told them, of course everyone has heard of it at least once...
What are those "multiple reports"? I see the source is golem.de (don't get me started on that one) and "multiple reports" can just mean that less than half a dozen people got busted on their Google accounts for not using proper 2FA in that context.
Also the article states that Google Pay provides a virtual credit card when used with PayPal. How? All I saw up until now was virtual debit cards.
> Also the article states that Google Pay provides a virtual credit card when used with PayPal. How? All I saw up until now was virtual debit cards.
Naming confusion, I think. To some people (mostly Americans), credit and debit cards are the same thing - just plastic payment cards. Some Europeans think Visa/Mastercard can only be credit cards (they can be either credit or debit).
To me, the difference is that credit cards use the bank's/issuer's balance (that you can pay back later, all at once or spread out in smaller monthly payments), while debit cards use your bank account balance directly. I think that's the proper differentiation.
For added confusion, credit and debit cards have different processing networks, with the debit networks having much lower fees. And in the US at least, virtually all debit cards can also be processed as credit if the store doesn't accept debit directly (eg online payments). And while in Europe all cards have PINs, in the US only debit cards have PINs, and running them as credit allows you to bypass the PIN requirement.
Just as a PSA: it wasn't until I looked at one of these PayPal articles in the last few days that a screenshot showing how to enable 2FA revealed that TOTP-based authenticator apps are now allowed. For the longest time, PayPal only allowed 2FA via SMS after they chucked their old physical security keys.
Anyone who's been stuck on SMS may wish to login and switch over to TOTP.
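For anyone curious what the switch buys you: a TOTP code is just an HMAC over a time counter and the shared secret shown at setup, so it never travels over SMS. A minimal, standard-library-only sketch of RFC 6238 (the key below is RFC 6238's own published test secret, not anything PayPal-specific):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published SHA-1 test vector: at t=59 the 8-digit code is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```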
I tried it when they introduced it and gave up again: there is no way to mark a device as trusted, and I'm certainly not opening my 2FA app for every single login/payment.
Also, this is 2020; where is WebAuthn? That would at least make the constant 2FA a bit more bearable.
Bitwarden and probably other password managers will put the OTP in your clipboard after filling your credentials in so you don’t have to open anything to get the TOTP.
Every single login asks for TOTP, and sessions expire very quickly. Even the mobile app asks every time, even if you use some saved/authorized login method like fingerprint login.
For someone who uses PayPal for most online payments this can be extremely tedious.
> For someone who uses PayPal for most online payments this can be extremely tedious.
To rephrase that: the quantity can range from "almost every online payment" to "every online payment". If, like many people, you try to use PayPal for most payments to avoid credit card info leakage, that means you need to answer a TOTP challenge on every payment.
So NFC reading of embedded card details is always on, regardless of whether you are in "payments" mode or have the app open? Is that a PayPal flaw, or is it an Android/NFC/Google Payments flaw?
> So NFC reading of embedded card details is always on
Apple have a similar option, BUT they restrict its use to certain payment terminals (for example, it is only supported by TfL in the UK) and it can be disabled if you want. Dunno if the data it exposes (in a physical read attack) could be used for other payments.
> With Express Transit mode enabled, you don't have to validate with Face ID, Touch ID or your passcode when you pay for rides with Apple Pay on your iPhone and Apple Watch. And you don't need to wake or unlock your device, or open an app.
It's a Google Pay feature, but it only works up to the "no CVM" limit, which varies by country (but is usually something like $20-$50 or the local currency equivalent).
PayPal is really bad with security. A friend of mine reported a CSRF vulnerability that let an attacker withdraw all the money out of a Venmo account (Venmo was acquired by PayPal) if the victim visited the attacker's website. It took them several weeks to fix, and my friend didn't receive any bug bounty.