“We found PayPal vulnerabilities and PayPal punished us for it” (cybernews.com)
980 points by teslademigod1 on Feb 24, 2020 | 328 comments



HackerOne appears to be completely broken and I wouldn't recommend it to anyone.

Disagreements are to be expected on a bug bounty platform, but these days they just stop responding altogether and don't pay. It borders on outright fraud.

I've been trying to report a Squid RCE (CVE-2020-8450) since October. The Squid maintainers seemed unprepared to deal with the report: they were repeatedly unresponsive and it took 2 months to merge my patch. They may be volunteers, so I can't blame them. On January 20th I reported it to the bug bounty [1], which promises high rewards, and apart from triaging it there has been radio silence since, despite my having invoked HackerOne mediation. I have more Squid memory bugs and I'd rather rm -rf them than go through this process again.

HackerOne used to be decent but this appears to be a structural problem now [2].

[1] https://hackerone.com/ibb-squid-cache

[2] https://twitter.com/DevinStokes/status/1228014268567547905


I worked as a contractor for a company that's a household name in the US. I am now convinced that HackerOne only exists for CISOs to say "look, I'm doing something" during the 2-3 years they stay at a company.

The cybersecurity team had a backlog of roughly 30 critical issues discovered internally before starting HackerOne. We were unable to fix those issues, or the ones reported to us, because we had no visibility into source code, there were 12 different development teams, most of them outsourced, and all the project managers were interested in was covering their ass.

The HackerOne deployment was invite-only, but the few hackers in it did fantastic work. I kept being told to find excuses to reduce the amount we'd pay for the critical issues they'd find and we'd fail to fix. At least we triaged faster than Paypal.


Ohh, you're very right. The sales team is very focused on "selling" to the CISO (rightly so, I suppose). I was part of a team that got the big sales pitch.

Light on technical details, high on "let us handle this for you, we know hackers / We'll throw a big Defcon party for anyone you want."


Hi, if you’re at all interested in discussing this (including if you’d prefer your name not be disclosed) please email David.morris@fortune.com. Thanks.


HackerOne’s community team also seems trained to gaslight ethical reporters who try to follow responsible disclosure practices.

I submitted a vulnerability to a vendor on H1 along with a typical “I plan on publicly disclosing this vulnerability on X date” note, and started getting emails directly from H1 telling me that this undermined vendors’ confidence in the platform and that doing what I was doing might make it so I can’t use HackerOne any more. In the same correspondence they said that my approach made sense—but they continued to threaten that “it would be a shame if you weren’t able to participate any more”.

In my case, the vendor verified the vulnerability quickly, but kept dodging my follow-ups by replying without answering my questions. When the vendor refused to assign a CVE after I asked four times, I contacted the HackerOne CNA directly to get an assignment. They replied within 48 hours asking if there was any public information already, I said no and that I was planning on disclosing on X date, and then they just stopped replying for a month until after the deadline passed.

At a glance, H1’s disclosure guideline appears fairly reasonable: 30 days by default, an upper bound of 180 days. In actuality, those times only start once a vendor closes a ticket, and can be extended indefinitely. Reporters aren’t allowed to speak publicly about anything they send to the platform until the ticket is closed and the vendor agrees to allow it, even in the public programs.

As far as I can tell, HackerOne’s primary purpose now is to act as a shield for bad vendors to hide their security defects from the public by using network effects to bully reporters into keeping quiet. The community team claim this isn’t what they’re doing and that they always ask “why should this be private?”, but their marketing material to vendors tells a different story[0], their actions with me tell a different story, and the vendor I reported to had over 100 closed reports, going back years, and none of them were publicly disclosed.

Unless you must pay your bills with security bounties, or don’t actually care and just want to dump a report and forget about it, I unequivocally recommend against using HackerOne to report a vulnerability.

[0] https://www.hackerone.com/sites/default/files/2018-11/The%20... page 12: “even with a public program, bug reports can remain private and redacted, disclosure timeframes are up to you”


I 100% agree with this.

I reported to one program, which ignored the report and effectively stalled until the startup had pivoted to a different idea. HackerOne didn't remove them from the platform and did not make it possible for me to publish the report through their platform (and publishing it otherwise would have likely violated some ToS).

I reported a second issue to Cloudflare. It was acknowledged as a known issue within less than an hour, but still not fixed months later, and again I was unable to publish it.

Despite waiting for months and requesting disclosure repeatedly, none of these reports are disclosed yet.

In the future, if I find a vulnerability and the only reporting path the company provides is HackerOne, I will apply full disclosure instead.


>HackerOne’s community team also seems trained to gaslight ethical reporters who try to follow responsible disclosure practices.

I would step back and question the very concept of 'responsible' disclosure. For starters, even the name seems manipulative, setting the tone of the conversation in a way that, in most other settings, wouldn't pass the smell test.

It seems like a short-term optimization with longer-term costs. While releasing the vulnerability into the wild would likely be followed by bad actors exploiting it, by sitting on it until the company fixes it we create an environment where companies are given a grace period whenever vulnerabilities are found. This in turn gets factored into their decision-making about how much to prioritize security.


The name "Responsible Disclosure" is in fact Orwellian and was designed that way, and people should avoid using it (or, really, the word "responsible" in discussions like this). The preferred term is "Coordinated Disclosure".


Help me out here, in what way is it Orwellian?

I always assumed the responsible part had multiple non-conflicting meanings, that 1) The researcher would not disclose it to the public until the vendor has a reasonable amount of time to fix it and 2) the vendor is assumed to want to do the right and responsible thing in fixing the flaw.


It presumes a definition of "responsible" that suits the interests of vendors and treats the safety of end-users as an externality, in such a way that anyone operating in good faith and responding to different legitimate incentives is by definition "not" disclosing "responsibly". It's a linguistic ploy, and not one that should be dignified.

In 2020, non-ironic use of the term "responsible disclosure" has become somewhat of a "tell" that the person speaking isn't super connected to vulnerability research.


You’re right that I am a software engineer, not a vulnerability researcher. I keep up with vulnerability research only insofar that I need to be aware of new classes of exploit so that I can write secure code (and, hey, it can be interesting!).

So, what is the correct term that is supposed to be applied to the approach of disclosing to a vendor first, giving them a hard deadline, and then doing a public disclosure? As far as I know, it’s not “coordinated disclosure”, since “coordinated disclosure” normally means the vendor controls the timeline.


It is in fact "coordinated disclosure".


Even when the disclosure is not actually coordinated, in the common sense of the word, because the vendor doesn’t agree to the deadline and/or isn’t given any option to pick a longer deadline?

Edit: The Google Project Zero FAQ[0] explicitly states its approach is not coordinated disclosure:

> Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. Coordinated vulnerability disclosure is premised on the idea that any public disclosure prior to a fix being released unnecessarily exposes users to malicious attacks, and so the vendor should always set the time frame for disclosure.

It seems to me that if “responsible disclosure” is problematic for the reasons you’ve mentioned, “coordinated disclosure” is too. Actually, it’s maybe even worse, since “the researcher refused to coordinate with us on the deadline” is objectively true, whereas “the researcher didn’t disclose this vulnerability responsibly” is totally subjective.

As I said in https://news.ycombinator.com/item?id=22407821 I don’t like the phrase “responsible disclosure”, especially given its history, but “coordinated disclosure” doesn’t seem to do any better at being a phrase that can’t be weaponised against researchers. It also has the downside of meaning different things to different people within infosec which makes it unreasonably hard to communicate effectively and concisely.

So, you know, anyone reading this with high stature in infosec, please coin something unambiguously unique (“time-gated disclosure”?) so less time can be spent talking about semantics and more time can be spent on how to improve software security for everyone. :-)

[0] https://googleprojectzero.blogspot.com/p/vulnerability-discl...


I don't know what else to tell you. "Responsible Disclosure" was literally a coercive marketing strategy cooked up by vendors; it isn't a term we arrived at organically. Don't use that term. Use any other term you like, but the convention in the field is "coordinated disclosure".


To counter the Orwellian term, consider “informed disclosure”?

The vendor is informed before disclosure. The security researcher is an informed expert disclosing to end users. Well-behaved vendors can further inform researchers about the challenges driving their need for alternative timing.

On the timing dimension, consider “cadenced disclosure”?

Less about consensus, more about the beats.


I'm really not interested in trying to coin new terms.


"Coordinated" would suggest the bug discoverer and the vendor coordinate on a time. (As for whether this is what actually happens in practice, no idea.)


That sort of makes sense, but what are examples of other legitimate incentives that might compel a researcher to disclose the presence of a vulnerability before the vendor has a fix?


For instance, the vulnerability is being actively exploited already, or is trivial to find.

In reality, it's not incumbent on researchers to wait for patches at all. You can straightforwardly argue that you're obliged to give users enough of a head start to stop using the product if the risk is intolerable to them, and then disclose ready-or-not.


Google Project Zero have been unequivocal about how their forced disclosures have caused vendors to release security patches earlier and more frequently[0], which is a win for everybody.

Otherwise, research suggests that the chances of a vulnerability being independently rediscovered within three months may be as high as 1 in 5 for certain types of defects[1]. This means that even if you don’t know a particular vulnerability is being actively exploited, you’ll eventually find one that’s being quietly exploited by someone. Since you don’t know which one it’ll be, early disclosure at least gives end users the opportunity to apply mitigations and hopefully burns a 0-day being used by an internet bad guy.

[0] https://googleprojectzero.blogspot.com/p/vulnerability-discl... - “Why are disclosure deadlines necessary?”

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2928758
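
To put a rough number on the compounding effect -- my own back-of-the-envelope illustration, not a figure from the paper -- suppose each withheld bug independently has the cited 1-in-5 chance of being rediscovered within the window. The chance that at least one bug in a private pile is already in someone else's hands then grows quickly:

    # Back-of-the-envelope sketch (illustrative only: real rediscovery
    # events are not independent, and 20% is the paper's upper bound).
    # Probability that at least one of n privately held bugs has been
    # independently rediscovered.
    p = 0.20

    for n in range(1, 11):
        at_least_one = 1 - (1 - p) ** n
        print(f"{n:2d} withheld bugs -> {at_least_one:5.1%} chance one was rediscovered")

With 10 withheld bugs that already works out to roughly an 89% chance.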


I don’t like the name either but I don’t have a megaphone big enough to coin a new term of art and it is the only phrase I know that is used for “disclose to the vendor first, then after a fixed amount of time, full disclosure”.

There’s no question that the name is manipulative since it was first coined to refer to what’s now called “coordinated disclosure”[0], in an essay which referred to full disclosure as “information anarchy”[1].

In this case with HackerOne, the issue is not with terminology, nor is it with time-gated full disclosure versus immediate full disclosure. Rather, the issue is that HackerOne go out of their way to serve the interests of vendors who don’t want to fix defects quickly, at the expense of reporters, end users, and software security in general.

HackerOne could take the approach of saying “we are a neutral platform for connecting researchers and vendors, it is not within our purview to try to stop reporters from disclosing vulnerabilities if they feel it’s necessary”, but this is not how they operate today, and people should know this.

[0] https://en.wikipedia.org/wiki/Full_disclosure_(computer_secu...

[1] https://web.archive.org/web/20011109045330/http://www.micros...


> ... HackerOne go out of their way to serve the interests of vendors ...

I don't mean to sound so dismissive but... of course they do!

Who's paying HackerOne? The vendors!

Who is gonna be their first priority? The vendors that are paying them!

They certainly aren't going to do anything to harm that relationship that is keeping them in business.


> HackerOne appears to be completely broken and I wouldn't recommend it to anyone.

Completely disagree with this.

I launched a HackerOne program for my company last month (for free, not using their “managed” service).

Of the many reports people submitted, we triaged 30-40 valid reports (most very minor, one or two moderate). We paid out a few thousand dollars in rewards.

At the same time, we also did a more traditional 2-week penetration test with Cobalt (https://cobalt.io/) that cost over $10,000, and HackerOne was the clear winner when it came to the number of high quality security reports worth fixing. And H1 was 2-3x cheaper after paying out the bounties.

I’m sure HackerOne isn’t great for all companies, but just posting this to refute the blanket statement that HackerOne is “completely broken” across the board.


You are not disagreeing. The grandparent is saying that H1 isn't great for researchers.

For companies, it may well be very good, both for honest ones and for ones just looking to "cover their ass".


This is not funny: many of us live in countries where we need that money. We're underpaid, but we try. I'm glad that we have high-quality researchers on these platforms, but punishing us goes too far.


> And H1 was 2-3x cheaper after paying out the bounties.

Perpetuating a system wherein security researchers are massively underpaid for their services because of a terrible abusive platform doesn't seem like a very nice way to do business.


The platform has nothing to do with the rates; plenty of companies run bounty programs without H1, and there is a general standard range for findings across all platforms.

None of these findings appear to have been worth much of anything.


Surprise: you don't need HackerOne to get reports. sendbugshere@company.com solves the same problem without introducing a new trusted party.


I've never liked these rent-seeking bug bounty platforms, which insert themselves as middlemen and mediators but take away the real value that comes from building direct client relationships.

it's ok for people who start out and only want to work on vulns and not bother with "sales" (building long term client relationships). severely limiting though in the long run!

much better to spend time pitching your service directly and building a name for yourself this way. most customers I had came back and rewarded me with more work. on those bounty platforms however you're constantly competing with drive-by pen-testers who lower your price, and you have no say in the whole negotiation and bargaining phase. your previous reputation also tends to stay locked into these platforms.

a better long term approach is to build connections, set up an ltd (LLC) and make sure you have a good lawyer who can advise you (not just when things go down). ideally build a collective with other like-minded people (e.g. like a consulting or law practice where you don't always have to share clients but you can if you want to complement each other's skills).

this is imo the best way to escape the "scope-prison" and the best way to learn about clients' additional (and actual) weak points (points that they haven't themselves even thought about).

does anyone here do it this way or with a similar approach?


We've had bug bounty programs in the past. The biggest time sink is filtering the bullshit. You need someone with more than amateur levels of technical chops to do it (which means someone who will have less time to do other things).

I've been that person before as both the 'do it yourself' bug bounty program as well as the 'filtered by hacker one' approach and I'll take the latter every time.

Outsourcing to HackerOne helps cut down on the bullshit, and that's where their value add is (and to a lesser extent the reputation system, though if someone is reporting on HackerOne I'll give them the benefit of the doubt anyway). Anything else on top of that is just upsell.


> I've never liked these rent-seeking bugbounty platforms which are inserting themselves as middle-men and mediators, but then take away the real value that comes from building direct client relationships.

They can add value for companies that don't have a reputation and want to have their security problems discovered. But they have to follow through on behalf of researchers and threaten to remove companies that don't pay bounties and/or don't investigate and remediate issues.


H1 doesn't really prevent you from building direct client relationships. People are just bad at building direct client relationships.


Squid is vastly under-equipped to deal with the security hygiene needed for a project this important.

That's the tragedy of the open source world: mission-critical for everyone, but no actor willing to maintain it properly. It's Heartbleed all over again.


I keep thinking we need some sort of new license for open source that limits which entities can use the software based on their net worth or the net worth of their shareholders. That way large companies like Google can automatically fund the long tail of projects without burdening casual hackers or startups with unnecessary costs.


The Business Source License allows you to restrict use of the open source product based on thresholds of your choosing. Entities using it above the threshold need to pay for a license.

After a certain amount of time passes, the software reverts to an open source license of the author’s choosing:

https://mariadb.com/bsl-faq-adopting/

Example: use restricted to non-production use, reverts to GPLv2 in four years:

https://mariadb.com/bsl11/

The clock protects users from lazy / out of business vendors. If few enough improvements have been made recently for it to make sense, the customers simply fork an old version of the project, and deploy that instead of paying for ongoing “development.”

(The business source license is not “open source”, but I think it is close enough to be a good compromise in practice)


> I keep thinking we need some sort of new license for open source that limits which entities can use the software based on their net worth or the net worth of their shareholders.

That might be a new license, but it is by definition not open source. And, no, companies like Google won't “automatically” buy commercial software with that style of license; from their perspective it's worse than regular commercial software since it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors.

EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?


Hey, try not insulting people that are trying to have a reasonable conversation.

> EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?

It makes sense for the people that are getting the most profit from a piece of software to be the ones paying for basic maintenance/cleanup/improvements.

If you want customizations or new features, that's when it makes the most sense to 100% self-fund.

> it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors

On average, I'd expect it to still be a lot cheaper than commercial closed-source software.

And what exactly do you mean by competitive advantage here?


> It makes sense for the people that are getting the most profit from a piece of software to be the ones paying for basic maintenance/cleanup/improvements.

It often does make sense for them, and you can see lots of cases where this is done with actual open source software without resorting to a commercial license that discriminates on scale. If it doesn't make sense for the people you want to pay, making a free-for-everyone-else license isn't going to convince them that it does; it's just going to convince them that they are better served elsewhere.

> If you want customizations or new features, that's when it makes the most sense to 100% self-fund.

I would argue that it makes sense to 100% self-fund the additional work whenever you want something more than the open-source offering provides and doing so is less expensive than commercial (off-the-shelf or bespoke) solutions, whether the additional work is basic maintenance or something else. And if you are using an open source solution that isn't maintained adequately for your needs, the responsibility for addressing that isn't on other users that you’d like to have subsidize your use.


This is totally incorrect about the licensing models of open source.


I can only guess what you might think is wrong, but if you think it is the description of the proposed license as not-open-source, I would direct you to paragraphs 5 & 6 of the Open Source Definition.

https://opensource.org/osd-annotated


That is a recipe for an ambitious PM in the corp to just make their own version, tailored to their 1-of-10-companies-in-the-world needs. Once you start getting that big, build starts becoming cheaper than buy for critical infra.


Upvoted, but not sure it's a tragedy. Much like that quote about democracy, it's a bad system, except the others are worse. Would be nice to have something better tho.


> the others are worse

I think we need more experimentation with solutions to the (open-source) public goods problem before we can say that the others are worse. Ditto with experimentation on variants of democracy. Significantly harder to experiment with that than with open source funding though.


Open source is pretty successful if you include the plethora of knowledge it provides to new developers. It is difficult to quantify, and it is symptomatic that there is little prestige in committing to any open source project.

I don't really get the democracy comment.


I would think experimenting with variants of democracy would be a more frequent event. Consider the small scale class president or family voting for a movie.


My experience with Hacker One is almost entirely negative and I don't understand why it has such mindshare.


What does "broken" mean here? If the development team is unresponsive, what do you expect H1's response to be?


H1 used to have a mechanism where researchers could push to make raised issues public if a ticket was ignored or marked as wontfix.

That was a good way to keep companies honest, an implementation of responsible disclosure.

So H1 could implement that again. It doesn't get them a bounty but it does stop companies pretending reports don't exist, if that's what has happened here.


If you have a Squid RCE, what prevents you from getting a CVE for it, writing a blog post, and announcing on Twitter?


HackerOne has threatened to ban researchers who disclose responsibly.


H1 is somewhat unlikely to ban someone who holds a real RCE in Squid for months and then publishes it, because H1 needs those people on its platform. Most H1 bounty people are just running scanners to find DKIM quirks.

I think the conversation about whether H1 is problematic or not is a fine thing to have at the top of the thread. I can see people going either way on that question (bear in mind that it has as much to do with idiosyncrasies of each of H1's customers as it does with H1 themselves).


Remove that bogus program from their platform at the very least?


HackerOne is a complete fraud. They've got a super-duper-simple carrot-before-the-horse business model which has thousands of kids beating up web apps for free. A valuable service for their Fortune 100 clientele; for the people actually doing the work for them, not so much.


From PayPal's response to a 2FA bypass:

> If the attacker has the victim's password, they would already be able to gain access to the account via web UI too. As such, the account is already compromised. As such, there does not appear to be any security implications as a direct result of this behavior.

Seriously? This means PayPal's 2FA is just security theater. I'd rather they didn't offer it at all in this case, at least then I'd know how insecure my account really was.


From reading a different article, the terminology seems to be the bone of contention here. This "2FA" is an email message PayPal sends when they detect a new login location. They do not call it 2FA, and they do offer actual 2FA that cybernews have not bypassed.


It's very obviously a distinction without a difference though. Like the authors say, this is an amazing opportunity for black-market PayPal account buyers. It's the only line of defense that thousands of people have between black hats and their bank account. In any case, I'd definitely call this 2-factor authentication - the only difference is the trigger (every login vs suspicious logins). It just so happens that they have different code for each of those two cases, and these bounty hunters have discovered a bug in one of them.
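
To make the "same mechanism, different trigger" point concrete, here is a minimal sketch in Python -- hypothetical logic and made-up names (known_devices, send_code, login), not PayPal's actual implementation -- showing how always-on 2FA and a risk-based challenge can share the same code-verification step and differ only in when it fires:

    # Toy login flow, for illustration only.
    import secrets

    # Devices each user has previously logged in from (toy in-memory store).
    known_devices = {"alice": {"laptop-fingerprint"}}

    def send_code(user):
        # Pretend to deliver a one-time code by email/SMS; return it for verification.
        code = f"{secrets.randbelow(10**6):06d}"
        print(f"(sending code {code} to {user}'s email/phone)")
        return code

    def login(user, password_ok, device, always_on_2fa):
        if not password_ok:
            return False
        # Always-on 2FA challenges every login; the risk-based variant
        # challenges only when the device fingerprint is unrecognized.
        if always_on_2fa or device not in known_devices.get(user, set()):
            expected = send_code(user)
            if input("Enter the code you received: ") != expected:
                return False
            known_devices.setdefault(user, set()).add(device)
        return True

Bypassing the risk-based variant means defeating the trigger (making a new device look like a known one), not breaking the code check itself -- which is why the finding matters even though the opt-in 2FA path is untouched.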


I don't think I can agree that there's no difference, because if you say to me that you've bypassed PayPal's 2FA I'm going to think that you've bypassed the opt-in one, not the extra security check one. PayPal does not consider these accounts to "have" 2FA.

Overall that may be too pedantic, and it shouldn't give PayPal a pass on the issue. Perhaps even entertaining it just muddies the waters, allowing PayPal to slip away. The extra check is a security check. If cybernews have bypassed it then they have bypassed a security check. Logically this is therefore a security issue, and if PayPal are saying that it's not a security concern then they're saying that they were just wasting everybody's time with the unnecessary check to begin with. That would clearly be a lie, as the fact that they developed and continue to use the system indicates that they think it provides security.


There's a huge difference between an informational email ("somebody just logged into your account, was it you?") and a 2FA workflow which does not let you log in without entering the proper code. The latter is a security feature; the former is at most an auxiliary informational feature.

> I'd definitely call this 2-factor authentication -

You'd be misunderstanding what "authentication" means then. Notification and authentication are different things. Email is notification, not authentication. Confusing it means either not knowing what authentication is, or purposely confusing matters to present issue as something it isn't.


To be clear, the extra check that is bypassed is not merely an informational message, the system sends you a message and you are supposed to have to enter something contained in that message in order to continue from that IP address/computer.


OK, if it blocks login then it's at least partial 2FA for those logins. Judging from the Forbes article I thought it was only informational, but if it's not, then it's part of the auth workflow and can thus be regarded as 2FA.


Could you link that article? Because the screenshot in this article pretty clearly shows PayPal sending an SMS to the user's phone.


https://www.forbes.com/sites/zakdoffman/2020/02/22/paypal-cr...

I should note that I haven't really investigated this so I don't claim to know any truth.


I was about to dismiss the article thanks to lines like this:

> In essence, it would work with phished credentials just as well as with stolen ones

But, sure enough, it's not the opt-in 2FA, triggered on every login, that was bypassed, but the 2FA checks triggered when PayPal detects suspicious activity. As far as I can tell, if you've enabled 2FA yourself, this bypass won't work. Thanks for the link! Going to go make sure I've enabled that...


I went to enable the opt-in 2FA in response to this report. It's pretty rough, IMO. It gives you no way to use scratch codes as a backup. You're stuck with either adding a second TOTP device or allowing SMS as a backup.

Adding a second TOTP device is OK security-wise but adding a second device to my safe and making sure it's still working periodically kind of sucks.

SMS is not OK.

Printed scratch codes would beat the snot out of either.


You can make a backup by saving in some safe place a copy of the QR code and/or the 16 character text code Paypal gives you to set up your TOTP device.

You can then use that later to set up a replacement TOTP device if something happens to your first one.

I usually use "grab" on my Mac to save a copy of the QR code as a PNG, encrypt that, and save it in an offsite location.

Another popular approach is to print the QR code and save the printout in a fireproof safe. If you do that, I recommend printing it before you use it to set up your first device, and then setting up the first device from the printed code just to make sure the printed code is fine.

If you save the text code, you can also use that with oathtool from oath-toolkit [1] to generate the TOTP code on the command line if you need to use Paypal before you have your replacement TOTP device.
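
For the curious, here is roughly what oathtool computes from that saved 16-character code: a minimal RFC 6238 TOTP sketch in Python, standard library only, assuming oathtool's default parameters (SHA-1, 30-second period, 6 digits). The secret shown is a made-up example, not a real key.

    # Minimal TOTP (RFC 6238) -- roughly what "oathtool --totp -b <secret>"
    # does with its default 30-second, 6-digit, SHA-1 configuration.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_base32, digits=6, period=30):
        # Normalize the secret: strip spaces, upper-case, re-pad for base32.
        s = secret_base32.replace(" ", "").upper()
        s += "=" * (-len(s) % 8)
        key = base64.b32decode(s)
        # The moving factor is the number of whole periods since the epoch.
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226, section 5.3).
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical example secret

This also shows why two devices provisioned from the same QR code generate identical codes: the output depends only on the shared secret and the current time.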

Note: if you do want to have two TOTP devices set up at the same time, there are two ways to do this with Paypal. One way is just to scan the same code in both devices. You can either set them both up at the same time, or add the second one later using the backup you made of the original code.

The other way is to go to Paypal's security settings and explicitly say you want to add a backup TOTP. It will then give you a QR code to scan. That is not the same code as it gave you for the first device. The codes generated from the second device initialized from that second code will not be the same as the codes from your first device.

I have no idea what the user interface is for logging in when you have two devices generating separate TOTP sequences. Does it expect you to use the first device, and if that fails ask you to try a backup? Or does it just accept codes from either? Or something else?

Offhand, I can't think of any compelling reason to prefer your two devices to have different codes, or for Paypal to need to know that you are using two devices. Just setting them up with the same code and letting them appear to be the same device as far as Paypal is concerned seems simpler to me.

[1] https://www.nongnu.org/oath-toolkit/


Capturing the QR code is a good idea. I was reluctant to do that without verifying that there wasn't some time-based element to the QR itself that would make it hard to use when restore time came.


reading both, looks to me like this is pretty much 2fa. isn't 2fa defined as a "second factor" beyond user:pass?

isn't that what this bypass is about?


There is genuine disagreement about whether email qualifies as a second factor. As it is often just protected by a username and password the argument is that it's the same "something you know" factor as a password, or just an obfuscation of the same factor.

I will say, that if cybernews have done what they say that they've done, and PayPal are claiming that it's not a concern, then PayPal are clearly in the wrong, and that remains true even if we all agree that this isn't 2FA.


In the Forbes article cited above, the author says that cybernews showed it to him:

"CyberNews claims—and the company showed me a demonstration—that it can successfully login to an account using basic credentials on a new computer. "

So for now, I'd say they did what they're claiming.


> As it is often just protected by a username and password the argument is that it's the same "something you know" factor as a password, or just an obfuscation of the same factor.

All factors are just varying obfuscations of "something you know" when you get down to it though.


it's not just email, it's phone also.

i recently logged into the company paypal from out of the country, and paypal complained it wanted to confirm the account via email; fine, i confirmed. and then it said it also needed to confirm via phone, i.e. a call.

so it is a form of 2fa.

can i also complain how much of a pain 2fa is if multiple persons use that account? you cannot enable it if they allow only one user per account. there are workarounds where there are multiple 2fa methods and i use the app and the other person uses sms.


I haven't looked at PayPal specifically, but if it's a standard authenticator app can you not both set it up via the QR code when you enable it while everybody is present?

However, obviously the real answer is to add multiple users to the same paypal account, which apparently you can do with a PayPal Business account.


yeah, you are right, for totp it should work.


also - this image: https://cybernews.com/wp-content/uploads/2020/02/security-ch...

looks like it's not email as the second factor, but your device via SMS


If PayPal says it is not a security issue, the researcher should just publish the details of how it's done.


Actually, PayPal recently added proper TOTP 2FA.


Does this mean I can now add PayPal TOTP easily? I've got an existing key in Authy[0], but I'd like to move to a different authenticator app

[0] https://github.com/dlenski/python-vipaccess emulates the Symantec VIP app, allowing you to provision a secret key, then export it to a different authenticator app


People have a weird mental model of how big-company bug bounty programs work. Paypal --- a big company for sure, with a large and talented application security team --- is not interested in stiffing researchers out of bounties. They have literally no incentive to do so. In fact: the people tasked with running the bounty probably have the opposite incentive: the program looks better when it is paying out bounties for strong findings.

Here are the vulnerabilities in their report:

1. They can suppress a new-computer login challenge (they call this "2FA", but this is a risk-based login or anti-ATO feature, not 2FA).

2. They can register accounts for one phone, then change it to another phone, to "bypass" phone number confirmation.

3. There are risk-based controls in Paypal that prevent transactions when anomalies are detected, and some of them can apparently be defeated with brute force.

4. They can change names on accounts they control.

5. They found what appears to be self-XSS in a support chat system.

6. They found what appears to be self-XSS in the security questions challenge inputs.

None of these are sev:hi vulnerabilities, let alone "critical". 2 of them --- #4 and #6 --- are duplicates of other people's issues. Self-XSS vulnerabilities are often excluded entirely from bounty programs.

For the last 3 hours, the top comment on this thread has been an analysis saying that, because Paypal is PCI-encumbered, and HackerOne reports can function as "assessments" for PCI attestations, Paypal is in danger of losing its PCI status (and the fact that it won't is evidence that they are "too big to fail"). To put it gently: that is not how any of this stuff works. In reality, formal bug bounty programs are a firehose of reports suggesting that DKIM configuration quirks are critical vulnerabilities, and nobody in the world would expect any kind of regulatory outcome simply from the way a bounty report does or doesn't get handled. It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.

The login challenge bypass finding was actually interesting (it would be more interesting if they fully disclosed what it was and what Paypal's response was). But these reporters have crudded up their story with standard bug-bounty-reporter hype, and made it very difficult to judge what they found. I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).


For #5 I believe it's not just a self-XSS, but also executes on the support agent's browser, allowing you to potentially exfiltrate their cookies:

> Anyone can write malicious code into the chatbox and PayPal’s system would execute it. Using the right payload, a scammer can capture customer support agent session cookies and access their account.


Yeah, they probably should have included a POC of the attack on initial submit. That one got patched after the N/A. That's pretty sad.

For example, POCs are provided in their example quality reports:

https://hackerone.com/reports/32825

https://docs.hackerone.com/programs/quality-reports.html


> It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.

Really? Most companies? That seems like an extraordinary claim.

I'm not a security researcher but if I stumbled on some security issue in something that's not open-source and not owned by my employer, the only way I'd consider reporting it is if they have a bug bounty / responsible disclosure program. Otherwise I'd expect it would be about as likely for me to receive a "thank you" as a knock on the door from law enforcement.


That depends on the kind of issue you found and the type of service it was, but, yes: without an authorization of some sort, it's probably unlawful to test for large classes of serverside vulnerabilities. The kind of work Project Zero does, on the other hand, is both more impactful and does not usually require authorization, since they're analyzing software running on their own machines.

Most companies should not run bug bounties. Most companies haven't even had a competently run software security assessment (either from an in-house software security expert or from a retained third party). Authorizing serverside tests and soliciting inbound reports from random people is not on the list of "first things you should do to get your house in order", and most people do not have their houses in order.

If this sounds like an extraordinary claim, I'd suggest maybe paying more attention to software security people and less attention to Reddit and HN stories about bug bounties; it's easy to get the wrong impression from message board threads, and as you can pretty plainly see, a lot of commentary on message board threads isn't well-informed.

Katie Moussouris is maybe a good starting point if you want to inject the "bug bounties can be bad" take directly into your veins. But there are lots of other people to listen to; it's a mainstream take. If you want a pro-bounty take, you can read what Cody Brocious writes. My (mainstream) take isn't the only decent take.


> I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).

I agree that they have some issues with the way they've reported it, and I agree with your numbered points (except that, for #5, the report implies it may make the support agent vulnerable), but I'm not sure you can say PayPal haven't acted abusively. Many of the reports are legitimate vulnerabilities even if they aren't critical ones. The first is clearly a security issue, yet PayPal have said that it isn't. In return the researchers have received nothing but a reputation hit, and this is clearly unfair.

Do PayPal specifically say that anything involving stolen details is out of scope? This seems a bit weak considering they have numerous systems in place to combat misuse of stolen accounts. And even if they do, it doesn't explain #2.

edit: To answer my own question, the page at lists "Vulnerabilities involving stolen credentials or physical access to a device" as out of scope for web applications. They likely intend that to apply to mobile applications also, but they've structured the page in a way that makes that ambiguous.


Do PayPal specifically say [...]

This is why this article is a bad HN submission - it's not really on everybody on HN to figure out whether these reports are any good, whether they were handled correctly by PayPal, HackerOne, etc. It's up to the people writing them up to make this as clear as possible and they don't come anywhere close to that. This just creates a massive discussion driven by speculation and off-topic tangents about a problem people had on ebay and talmudic regulatory 'analysis'.


The page I failed to edit in: https://hackerone.com/paypal


What proof do you have that #6 is not persistent XSS? If it is, that's a potentially brutal vuln (as persistent XSS often is), even if you need the users password to exploit it.

And persistent XSS is definitely not out of scope according to PayPal's guidelines. https://hackerone.com/paypal

Why are you saying #6 is a duplicate of other people's issues? It must have been marked as a dupe of an N/A. They would have gained rep if it was a dupe of someone else's report. They lost rep, so it was most likely marked as a dupe of an N/A.

As I mention below, the big problem is the OP didn't include POCs. It's easy to claim "oh, this can be exploited so easily," but without a POC it's not always clear, and perhaps he missed some detail that made his assumptions incorrect.

Anyways, I do have to say HackerOne looks pretty cool. This is the first I've seen of it, and they seem like they are working very hard (we all should be working hard) to make this work for everyone. They are likely just victims of their own success.


I agree that real stored XSS is a serious issue. Here, they're using a MITM to get the XSS payload injected, and Paypal has closed it saying it's not "externally exploitable". It sure looks like self-XSS to me. I agree: a POC would clear this up.

All I'll say beyond that is that if they had doc'd a real stored XSS bug in Paypal, my assumption would be that they'd get a bounty for that. That they did not get a bounty for it suggests that it was invalid. Paypal does not have any incentive to stiff researchers on valid submissions; they have in fact the opposite incentive.


Uh, some of these vulnerabilities are critical. And just because corporate signs up for a HackerOne bug bounty doesn't mean that the security engineers managing triage are happy about it.

Security analysis and penetration testing always results in the perception that the security auditor is calling their baby ugly. Always.


(remove message)

Sorry, on further thought: while I still disagree with the analysis above as overly dismissive, I think the OP may share some blame for not writing higher-quality reports with POCs. Also, the OP doesn't explain whether or not they saw the original reports for those marked Duplicate. That's a very critical point. See here -

https://docs.hackerone.com/programs/duplicate-reports.html

For anyone actually interested here and not just drive-by commenting (like me, ahem), it's worthwhile looking into the platform in more detail. See my post below -

https://news.ycombinator.com/item?id=22406372


I don't understand your argument here. "You patch you pay" is not a market term on any bug bounty; people report sev:infos all the time that ultimately get patched, but aren't worth anything (this is why some bounty programs stock sticker and t-shirt SWAG, to placate these submissions).

Meanwhile: I don't care even a little bit how Paypal arrived at their "duplicate" response, because Paypal has no incentive to deny a bounty for a valid bug. Like I said above, they have the opposite incentive. Duplicates happen all the time. If Paypal --- or any other large company --- says it was a duplicate bug, it would take extraordinarily clear evidence for me to believe otherwise.

Some of these things are probably not true for fly-by-night companies that set up bounty programs (a lot of people run bounties that shouldn't). I'm not denying that there are random companies that do ruthlessly screw with bounty submitters; I don't know any of them, but I believe they exist. But the money Paypal spends on bounties, all put together, barely even qualifies as a rounding error. They do not care; nobody serious cares enough to squelch reports to avoid paying bounties.

The fact that this was reported as "six critical vulnerabilities" is enough for me to tilt the credibility scale in the other direction.

Later

I'd appreciate it if you didn't edit your comment out from under my reply; the convention on HN is to update your post to clarify your argument in a PS, not to simply delete the parts you felt didn't hold up to scrutiny.


Oh, sorry about that. Bad habit. Had hoped to get to it before you wasted your time.

That all said, I think you have a knee-jerk reaction (given your history) to side with Large Corp. It shows here and really feels like it. Way overly dismissive and condescending.

Having personally worked for large corporations, I can say that the "it's not personal, it's business" motto is pure evil bullshit better suited for the mob.

If you (the royal you) can't treat people with the respect that they deserve, don't engage until you can.


My history? I haven't worked for a "large corp" since 1998. Our company works exclusively with startups.

When I say that Paypal has incentives not to ruthlessly deny bounties, I mean actual incentives, not "it feels good to do good" type stuff. Even if their reputation among bounty hunters is factored out: they literally have an incentive to pay bounties. That's the metric by which bounty programs are judged.


I agree -- the tone of the article was cloak-and-dagger, which makes me think things are not what they seem. Unless we fully understand the exact set of issues, it is difficult to decide either way.

Sadly, this also undermines trust in the overall state of "security research", which, most of the time, borders on being silly. :-/


This is on the more-competent end of the spectrum of bounty submissions, for what it's worth. Because the median bounty submission is very, very bad.

These people at least appear to have done some actual work. Paypal is probably one of the most overfished ponds in application security, and they didn't come up with much, but it's at least sort of interesting.


out of curiosity, do you work at PayPal or is the first paragraph all assumptions?

One would have thought Wells Fargo had a talented team of people to catch the millions of fake accounts they made, but alas, it went on for a decade. I will always assume companies have their backs turned to security until proven otherwise, regardless of size or perceived risk.


First, I do not work at Paypal, and have never worked at Paypal.

Second, if I did, it would be none of your business.

Third, comments like these are forbidden by the site guidelines, which demand that you not make accusations of astroturfing simply because you disagree with a comment.


Charitably interpreted, it's 'just' an accusation you can't possibly know the quality of PayPal's security team since you don't work there and lack the necessary insider knowledge. Calling you a naïf, not a shill!


You're right.

For clarity's sake: anyone with a significant number of acquaintances in SFBA appsec knows people working appsec at Paypal or their subsidiaries.


I think there is a subtle but real difference between claiming astroturfing versus claiming conflict of interest.


exactly.

FWIW, I wasn't claiming either; I was questioning the source of knowledge, as the claim seemed to be factually informed but may in fact just be the poster's opinion on the matter.


i am not assuming anything nor making any accusations. I am simply inquiring as to the source of the claim made. i guess we will establish that it's just the opinion of the poster.


> 1. They can suppress a new-computer login challenge (they call this "2FA", but this is a risk-based login or anti-ATO feature, not 2FA).

2FA means 2 Factor Authentication. This works by forcing one to use two different forms of identification to authenticate, such as login/password and, in this case, identification of the computer used.

So, with all respect sir, what I'm saying is while this isn't the best 2FA, it absolutely IS 2FA by definition.


No.


> No.

Please explain which parts of my comment are false?

Thank you.


This feature is not 2FA, and your argument is incoherent even if you fix the terminology, because many anti-ATO systems are heuristic and intrinsically "bypassable" by design, and yet you still want services to have them. ATO is an arms race.


This anti-ATO mechanism is asking for a code delivered via email (something you have) in order to grant access to the account. That is authentication via an additional factor, i.e. 2FA.


If you implemented this feature and called it your "2FA system", security engineers would laugh at you. It's clearly not 2FA. And, of course, Paypal has actual 2FA.


> This feature is not 2FA

> anti-ATO

ATO [1] is authentication by definition, but again, depending on how it's implemented, not usually the best form.

[1] https://csrc.nist.gov/glossary/term/authorization-to-operate


Authorization is not authentication, by definition. Furthermore, your link is talking about an entirely unrelated meaning of ATO. I believe tptacek meant it to stand for "account take-over".


You're right; that's what ATO means here.


Same difference. Anti-account-takeover and account authentication overlap, because similar methods would be deployed (i.e., multi-factor authentication, heuristics, etc.).


At risk of pointing out the obvious, ATO (as in Authorization To Operate) has nothing to do with logging in, technical authentication, or technical authorization. An ATO is a piece of paper or equivalent that lets your business deploy a product or solution, primarily used in the government space. It’s a contract that a human/organization signs, not a part of the login process for a computer.


No, obviously not.


This doesn't surprise me. I'm currently trying to get a refund out of PayPal after what looks like a massive flaw in their refund process. I paid for something on eBay and it appears to have been a compromised account. The original auction, feedback history, etc., looked legit. The flow was this:

1) I pay for a product on eBay using PayPal, with my credit card (charged directly to the card, not from any existing PayPal balance).

2) Seller marks the item as shipped but then 5 minutes later issues an e-check refund (rather than a refund to my credit card).

3) Seller cancels and deletes the original item on eBay so I can no longer raise a dispute there.

4) The e-check refund continues to bounce, as clearly the compromised PayPal account can't pull those funds from the other source.

5) The refund being in limbo means my dispute with PayPal gets closed as "a refund was previously issued" (which did, and will continue to, bounce).

The important part is 2: since I paid with my card, the refund should have gone directly to my card. Because I paid by credit card, I've raised a chargeback with the issuing bank, which should hopefully make PayPal sit up and put a bit more effort into sorting this out.


I've raised a chargeback with the issuing bank, which should hopefully make PayPal sit up and put a bit more effort into sorting this out.

Or just close your account and ban you.


Possibly; a blog post will follow if that happens. PayPal has always been a firewall around my credit card number, and I've never linked any other current account for pulling funds because, having worked in the payments industry, I know what a shit show it can be and that (in most cases, especially like this) the credit card issuer will stand with the cardholder and not the merchant.

Now I'm using other methods to pay for most e-commerce transactions: one-time PANs, a distinct debit account that I keep a minimum amount of funds in for this stuff, etc. So PayPal are no longer seeing anything like the level of use they once did from me. They can ban my account if they want; the issuing bank has already said they will proceed with the chargeback if PayPal don't issue a refund.


What may save you is terms on the card about refunds needing to be credited back to the card - pretty sure that used to be the case and probably still is. I think it was originally to prevent under the table cash advances, but also to protect merchants from giving refunds then being hit with chargebacks. If the bouncing e-check was done through PayPal as well then they may have screwed up by allowing it.


Try Privacy.com for a better credit card firewall.


I've been bitten before by the fact that if you don't use PayPal, eBay's interest in helping you with a refund dispute is exactly zero. And now I learn this. I guess PayPal + credit card is the way to go if you want any chance of a successful refund.


Unless you're getting boned with pseudo credit cards which are debit cards in disguise.

Direct banks such as ING or DKB in Germany are offering those cards but dear god if you have a dispute. Money is gone from your checking account right away and you don't get the convenient fraud protection of actual CCs.


You put in way too much effort. Call your credit card company first. Your credit card company profits from vendor (PayPal) mistakes by charging fees, so they are always happy to help you.


calling the Card issuer is always my first stop. CS these days is abysmal at most companies.


Doing a chargeback tends to burn any future business with the company--a big deal when you have a long-standing account with them.


Unfortunately, any platform with any kind of lock in will have you over a barrel here.

Try disputing a card transaction with Steam - your 18 y/o account with thousands of games and dollars invested will be gone in a flash. Same goes for Google/Amazon/Microsoft/etc.


yikes, i've never tried with any of those, but that would certainly suck. in my experience most platforms have a pretty fair complaint resolution policy though.


PCI DSS requirements specify that companies have 30 days to refute or remediate externally reported issues [1]. If they don’t respond to or fix some of these issues, then PayPal will no longer be compliant, and all credit card companies will be forced to stop working with them unless they wish to set a precedent that PCI-DSS compliance is no longer required to be followed.

According to this image [2], they did not respond or refute within 30 days.

If PayPal’s PCI-DSS compliance certification isn’t revoked then PCI-DSS is a farce.

[1] https://www.itgovernance.co.uk/blog/a-guide-to-the-pci-dsss-...

[2] https://cybernews.com/wp-content/uploads/2020/02/paypal-2fa-...


> PCI DSS requirements specify that companies have 30 days to refute or remediate externally reported issues [1]. If they don’t respond or fix some of these issues, then PayPal will no longer be compliant and all credit card companies will be forced to stop working with them unless they wish to set precedence that PCI-DSS compliance is no longer required to be followed.

Quote from your source:

> If your scan fails, you must schedule a rescan within 30 days to prove that the critical, high-risk or medium-risk vulnerabilities have been patched.

Scan in this sentence refers to "a PCI DSS external scan".

The list of approved vendors that can conduct PCI DSS external scans can be found here: https://www.pcisecuritystandards.org/assessors_and_solutions...

Please find cybernews' certificate number there and quote it for us; I have looked and can't find it.

I would guess that, contrary to your implication, they are not an approved scanning vendor. If this is the case then it really does not speak to the characteristics of PCI-DSS and your comment just seems wrong.

And even if they were an approved scanning vendor: from what little I know about PCI-DSS, these scans are part of a larger process, so the failed scan would still have had to be part of that larger process for this 30-day limit to apply.

I could go on and on about how much I hate PayPal and random other things, but just because I don't like something does not quite justify making false claims about it.


> I would guess that, contrary to your implication, they are not an approved scanning vendor. If this is the case then it really does not speak to the characteristics of PCI-DSS and your comment just seems wrong.

Actually this makes a pretty good case for this regulation being a joke. They clearly aren’t up to the responsibility of being a payment processor and are leaning on the law to sustain their business rather than simply demonstrating aptitude directly.


Why does the regulatory body get to approve who and what can scan implementations of their security scheme? It seems like the ideal auditor and scanning software, in PCI DSS's eyes, would be the one that just barely checks the boxes for minimum security requirements. Poking too hard at their security scheme would reveal how lackluster it is but they still need someone to poke at it to prove compliance. Being able to ignore anyone or anything that isn't on the approved list seems like willful negligence.


> Why does the regulatory body get to approve who and what can scan implementations of their security scheme?

Because it's a scan for PCI-DSS compliance, not a scan for security issues.

They do not fear that unapproved scanners will be more strict than approved scanners, they fear they will be less strict.


Because when you make the rules you get to make the rules?


It works better this way around than the other way around - which would be that PayPal gets to pick who audits them with no oversight. You can imagine how thorough that audit would be, and how many times it would find any problems.


> Actually this makes a pretty good case for this regulation being a joke.

PCI-DSS is not government regulation, but an industry created and enforced standard. Compliance is not mandated by federal law and only a couple of states have laws that reference it. For example, Nevada requires compliance while Washington doesn't require compliance but does remove liability for breaches for compliant businesses.


I meant it in a looser sense (like “constraint” I guess), but it just reinforces the reputation that PCI-DSS is a checklist you can buy, not meaningful in itself.


They "clearly aren't up to the responsibility"? Paypal has one of the larger application security teams in SFBA. You've decided they're not qualified because someone reported a self-XSS and Paypal didn't freak out?

Did you read downthread about the actual "2FA" feature this team "bypassed"?


> Paypal has one of the larger application security teams in SFBA.

It's not the size that matters, but how you use it that counts.


It's not a regulation. It's a contractual obligation between the merchant and the PCI council (which is made up of VISA/Mastercard/the backing banks/etc). It was put in place to avoid regulation.


Then perhaps regulation is necessary if this is their level of scrutiny?


Yeah, but in this political climate, we probably can't get regulation passed that isn't written by the companies it regulates.


That will never happen under any circumstance.


regulation: noun. a rule or directive made and maintained by an authority.

Regulations can be self-imposed on an industry, it does not have to be something the government imposes. Calling PCI-DSS a regulation is still accurate despite many people conflating the term "regulation" solely with government action. In this case, the authority creating the regulations is the PCI Security Standards Council who have their power because the big players in the industry give it to them.


Sorry, technically correct is the best kind of correct.

Don't look behind the curtain.


PCI-DSS is a joke. Just look at all the zero days that have come before today: Comodo the CA hacked, and DigiNotar, to name a few, plus the recent zero day in Windows highlighted by none other than the NSA back in January. The public have ADHD attention spans, so who cares as long as the money keeps rolling in, hey? Do you think your politicians, law enforcement, big businesses or banksters give a toss? Criminals rule the world, and it's been going on for thousands of years, with people believing in things like religion and royalty!


HackerOne states they are a PCI-DSS auditor approved organization [1].

[1] https://www.hackerone.com/product/challenge


Sorry, but you don't understand what you are looking at.

All of HackerOne's information that you cite is about them being PCI-DSS-compliant or having undergone a SOC2 Type 2 audit. Nothing you link to identifies them as a PCI-DSS auditing company. They are not.

And the "scans" the PCI-DSS standards refers to are standard pen-test and external vulnerability scans, usually conducted by an accounting company who will certify the scan results. They are for known vulnerabilities, things like the version of Apache you are on, etc. None of the reports sent via HackerOne would qualify as a "scan" under PCI-DSS.


> All of HackerOne's information that you cite is about them being PCI-DSS-compliant or having undergone a SOC2 Type 2 audit. Nothing you link to identifies them as a PCI-DSS auditing company. They are not.

Please read the page again. They specifically say you can achieve compliance certification with HackerOne.


You achieve that compliance by paying HackerOne, as a company, to perform a compliance scan. This does not mean any swinging dick that reports a vulnerability through HackerOne is causing PayPal to fall out of compliance. These scans are planned well in advance and are part of a normal audit cycle. (edit: typo)

On top of that, there's not really any legal issues for being non-compliant, as has been pointed out elsewhere in this thread.


As someone who deals with PCI-DSS compliance in fintech land on a daily basis this thread is showing me there are a lot of people who like to crow on about stuff they don't know a thing about.


You must be new here.


Indeed. However, it's refreshing to see a HN thread that's defending vendor snakeoil instead of assuming all infosec is vendor snakeoil.


We read the page, and even if your claim holds, it is still irrelevant because whatever you quoted is not the same as being a PCI-DSS approved scanning vendor. And even if it was, HackerOne did not perform any scans.

HackerOne offering auditor-approved challenges for PCI-DSS gets you nowhere towards the claims you made in your first comment.

To review:

1. HackerOne would have to be a PCI DSS Approved Scanning Vendor - AFAICT they are not, and neither is the CyberNews research team that did the scan.

2. HackerOne would have to have conducted the scan - they did not. The CyberNews research team did.

3. The scan would have to qualify as a PCI-DSS external scan - which ... do you get the part that HackerOne did not do the scan here or not? And nowhere did the CyberNews research team claim they performed a PCI-DSS external scan.

Please at least try to make an argument for your claims.


“SATISFY COMPLIANCE CERTIFICATION REQUIREMENTS

Meet pentest requirements for PCI DSS, SOC2 Type II, and HITRUST compliance certifications.” [1]

[1] https://www.hackerone.com/product/pentest


Can you remind me again why hackerone is relevant here? Who claimed where that they performed a PCI DSS external scan that failed?


The page only says that they do external security scans that other companies who do the actual certification recognize as valid scans. They certify no one themselves.

Further, that has absolutely nothing to do with anyone reporting vulnerabilities through HackerOne. That is not a scan by the definition of PCI-DSS, the SOC2 trust services criteria, or any other security framework you care to name.

Just give it up. You're wrong.


> HackerOne states they are a PCI-DSS auditor approved organization

Not anywhere on the page you linked. And a "PCI-DSS auditor approved organization" is not a "PCI-DSS approved scanning vendor", which, if they were, you could prove by just quoting the certificate number instead of linking to HackerOne.

----

EDIT: I guess you are referring to this:

> Meet penetration testing requirements for PCI DSS and SOC2 Type II compliance certifications with our auditor-approved penetration testing methodology and Security Assessment Report.

This in no way is the same as claiming "we are a PCI-DSS auditor approved organization". Which again, would be irrelevant if it was the case.

----

Further, if you read the article, it is clear the "We" does not refer to "HackerOne".

> When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level.

As far as I can tell "We" refers to cybernews.com

And again even if cybernews was a PCI-DSS approved scanning vendor it would still have to qualify as an official external scan within the PCI-DSS framework.


> Not anywhere on the page you linked.

Read the page carefully - it specifically states they are an auditor approved org.

Quote from page: “Meet penetration testing requirements for PCI DSS and SOC2 Type II compliance certifications with our auditor-approved penetration testing methodology and Security Assessment Report.[1].”

Secondly, PayPal works with HackerOne officially [2] and within the CVSS standards as they clearly state on their HackerOne page, which is complying with PCI DSS.

[1] https://www.hackerone.com/product/challenge

[2] https://hackerone.com/paypal

Edit: Archived in case:

http://archive.is/CvZqg

http://archive.is/GGDs2


Even if HackerOne were a company that is licensed to do PCI DSS scans, they were not contracted by PayPal to do it. A PCI DSS scanning company cannot just do independent audits for PCI compliance unless solicited by the target. The auditing process you are referencing is not related at all to reports by unsolicited scanners.


What is their certificate number?


> If PayPal’s PCI-DSS compliance certification isn’t revoked then PCI-DSS is a farce.

This comment chain has convinced me that PCI-DSS is a farce.


I'm not sure that you being convinced by someone who doesn't even understand that it was not HackerOne that found the vulnerabilities says much about PCI-DSS.


Pretty much. All it really proves is that an org meets a bare minimum of security standards. As noted elsewhere in the thread, it's used more for marketing and to serve as a "hey look at us we're self-regulating within industry!" than anything else.


> Read the page carefully

Emphasis mine.

"...satisfy the requirements for external penetration testing for audited PCI DSS and SOC2 Type II certifications."

"Final Report Delivered. Ready for Auditors."


Yeah but someone reporting a vulnerability to HackerOne is not the same as HackerOne reporting it. Otherwise you could just spam HackerOne with reports and remove someone’s compliance.


Uhm, but weren’t they operating via a bug bounty program? Now they’re supposed to be a registered auditor or whatnot?


Former QSA here... and the external scanning vendor's scans (one in each quarter) and the two required pen tests per year cannot be carried out by HackerOne. Automatic conflict of interest: HackerOne has a vested interest in a clean scan and in making PayPal look good.


> then PCI-DSS is a farce.

Take a wild guess at what you think will happen.


All these regulations are bs. Designed to keep small players out.


I guess you're not at all familiar with the self-report nature of PCI audits.

The purpose of PCI is to shift liability.


They were created for sincere reasons, and with best intentions. In the real world best intentions always conflict with the motivations of individual players.

It just isn't reasonable that PayPal would be cut off. That was always a toothless threat, at least for larger players.

As an aside, PayPal is a marvel to me because it is effectively lost in time. Using their tools and interface is like stepping back to 1995, and it seems -- from an outsider perspective -- that it must be some duct-taped quagmire that is barely holding on.


> They were created for sincere reasons, and with best intentions.

No? They were created by the industry to avoid actually being regulated and are a way to shift liability.

That doesn't mean they aren't also beneficial, but that's more a side effect than the intention.


Yes?

If we want to be cynical, of course there was a self-serving reason they created the standards -- because fraud, especially "internet" fraud, was on a massive upswing and it threatened this enormous new market of credit card spending. There is no question it's in their self-interest to improve the general condition of transactions.


> If we want to be cynical, of course there was a self-serving reason they created the standards

It's not cynical, it is literally the reason PCI exists.

> Five different programs had been started by card companies... The intentions of each were roughly similar: to create an additional level of protection for card issuers

- https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Sec...


Protection from fraud. Fraud costs the credit card industry money. It makes people less likely to trust/use them, which then costs them business.

You said -

"avoid actually being regulated and are a way to shift liability"

This is like saying a store put razors in a locked case to avoid being regulated. Or they simply don't want their stuff stolen?

I was being facetious when I said if we want to be cynical, because of course everything any business does is in their own self-interest. Of course it is -- that goes without saying, unless one is just trying to be glum.


> Or they simply don't want their stuff stolen?

Equating credit card fraud to physical theft is silly. The intermediaries of the credit card industry earn revenue by charging fees to process transactions. When fraud occurs, they're only liable if they were somehow responsible. PCI allows the network to shift liability to the periphery and lets the central network deny taking responsibility for systemic problems with the infrastructure.

To use your razors analogy, PCI is like Gillette shipping razors loose in a box to CVS and telling the store that it is liable if anyone gets cut or the razors get stolen, AND that Gillette can fine them if anyone gets cut or razors get stolen. But that's not how it works; in the real world razors come with safety covers in tamper-evident sealed plastic clamshells.


"Equating credit card fraud to physical theft is silly. "

In both cases someone is out money. It isn't a difficult step.

Fraud costs the credit card industry. It costs issuers (they shoulder 60% of the direct cost), merchants, and it costs the future of the industry because it is a nuisance for end-users.

They make a standard of best practices to reduce fraud. Following those best practices is good for every single participant, outside of criminals. Reducing fraud wholesale is the goal, obviously.

Spinning this in a nefarious fashion is not helpful to anyone, and does nothing but muddy the waters.


No one is denying that fraud has a cost or that there's benefit to mitigating it. Mitigation comes at a cost, and so stakeholders perform a cost/benefit analysis to determine the scope of mitigation employed.

PCI identifies recommended mitigations and imposes penalties for failures, but it doesn't ensure or validate compliance. It simply shifts liability from one stakeholder to another.

No one is spinning this as nefarious, but rather information that should be taken into consideration.


To be fair, they are probably not designed specifically for that; the issue is that big players are much more likely to have political leverage.

Or is this one much like the GDPR, with crazy fines that only big players can afford? In that case, it was poorly designed.


The GDPR max fine is (IIRC) 4% of revenue, so if you are a small fish you will be paying less than the big fish. Also, the fines are for wilful failure to comply; if you accidentally broke the GDPR, then your first offence is going to be more a slap on the wrist than an instant 4%.


Except it says "whichever" is higher, so if they decided to fine you 10 million or 2% of revenue, and your 2% is much lower than 10 million, guess which one you're paying...

> Up to €10 million, or 2% of the worldwide annual revenue of the prior financial year, whichever is higher

See: https://www.gdpreu.org/compliance/fines-and-penalties/
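
To make the "whichever is higher" part concrete, here is the cap from the quoted text as a function (a throwaway Python sketch, not legal advice; per the proportionality rules discussed below, the actual fine can be anywhere from zero up to this cap):

    def gdpr_fine_cap_eur(annual_worldwide_revenue_eur: float) -> float:
        # Art. 83(4): up to EUR 10M, or up to 2% of total worldwide
        # annual turnover of the prior year, whichever is higher
        return max(10_000_000, 0.02 * annual_worldwide_revenue_eur)

    gdpr_fine_cap_eur(50_000_000)     # small fish: cap is still 10,000,000
    gdpr_fine_cap_eur(5_000_000_000)  # big fish: cap rises to 100,000,000

So the flat EUR 10M floor is the part that can exceed 2% for small players.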


_Up to_. It's subject to various considerations (Article 83):

> (1) Each supervisory authority shall ensure that the imposition of administrative fines [shall] be effective, proportionate and dissuasive.

> (2) [...] When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following:

> the nature, gravity and duration of the infringement taking into account the nature scope or purpose of the processing concerned as well as the number of data subjects affected and the level of damage suffered by them;

> the intentional or negligent character of the infringement;

> any action taken by the controller or processor to mitigate the damage suffered by data subjects;

> the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32;

> any relevant previous infringements by the controller or processor [and other specified criteria]

> (8) The exercise by the supervisory authority of its powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and Member State law, including effective judicial remedy and due process.

They can't just arbitrarily decide to fine you the maximum.


GDPR breach fines are discretionary rather than mandatory. They must be imposed on a case-by-case basis and should be “effective, proportionate and dissuasive”.

Fining a small mom and pop site €20 million (€20M/4% is the highest fine, depending on the case) is not proportionate, and not effective, because I would like to see them actually collect on that, and I would say such a fine to a mom and pop would be dissuasive of doing business at all, which is not what the ICO in the U.K. would want. Speaking of the ICO: their big fine to BA for shockingly bad security came to 1.5% instead of the max 4%, because BA worked with the ICO (which still found they were failing in a duty of care to protect data), and they have been pushing the fine down the road ever since it was issued; at the moment, the earliest the ICO will actually fine BA is next month, and it's been almost a year since they filed their "intent to fine".

So while they can throw around heavy fines, it's not like they run every mom and pop site out of the country.


>I would say such a fine to a mom and pop would be dissuasive of doing business at all

Welcome to the EU. They've pulled these stunts before. When they introduced changes to digital VAT collection the lawmakers "forgot" that VAT has exemption thresholds. This effectively barred some small and micro businesses from selling their digital services/goods to other EU countries, because the business would not have been exempt from VAT afterwards. It took the lawmakers years to implement a minimum VAT threshold.

>It’s not like they run every mom and pop site out of the country.

Of course they won't, because people want to do business. There will always be more businesses that get started. The question is whether there will be fewer businesses started because of the regulation. So far analysis after GDPR points to yes.


If you want to get a feel for how GDPR fines vary, https://www.enforcementtracker.com/ keeps a list.


The key part there is "if they decide to fine you...".

The max(€10m, 2%) and max(€20m, 4%) are the most that supervisory authorities may issue as fines.

But supervisory authorities have a legal duty to issue fines that are proportional, which means that unless you breach the GDPR in a wilful and egregious manner you're unlikely to be fined that much (and if you are, you can appeal the fine to a court, which would reduce it to a proportional amount).


When would it ever be proportional to charge a small business more than 2% if they can never charge a large business more than 2%? Are small businesses, as a general rule, somehow more capable of causing damage than larger businesses?

Is the law as written somehow vulnerable to some legal hack where all my revenue goes through Company A but all my data goes through Company B, so that Company B has a small global revenue despite being extremely profitable to the controllers of the companies?


>Is the law as written somehow vulnerable to some legal hack where all my revenue goes through Company A but all my data goes through Company B, so that Company B has a small global revenue despite being extremely profitable to the controllers of the companies?

No. Who the data controllers are is a matter of fact, not assignment.

To quote the Court of Justice of the European Union in the Fashion ID case (C‑40/17) at paragraph 68:

"[A] natural or legal person who exerts influence over the processing of personal data, for his own purposes, and who participates, as a result, in the determination of the purposes and means of that processing, may be regarded as a controller".

Furthermore, as per that case, multiple data controllers may exist for some processing activities.

So both Company A and Company B may be considered to be Data Controllers and thus both liable.


But since we're talking about small sites and small businesses, how many of these will actually be able to afford to go to court to argue this? In every system mistakes are made and corruption exists. Why word it in a way that seems to increase the likelihood of both?


GDPR only charges big fines to big players.


https://www.gdpreu.org/compliance/fines-and-penalties/

> Up to €10 million, or 2% of the worldwide annual revenue of the prior financial year, whichever is higher

I'm no lawyer, but this doesn't sound like it's just for the bigger players: the cap is €10 million even when 2% of your revenue comes to far less than that. I guess it could be argued the 2% is geared towards hurting the big players.


At minimum you would be looking at "nothing". Between that and 2%/€10 million there are many possibilities: being required to answer questions, warnings, being required to make some changes, etc. And even once it gets into fine territory, fines from the max end are not something that always happens. Out of 208 documented cases (https://www.enforcementtracker.com/), there have been 6 fines that exceeded €10 million and another 4 that exceeded €1 million. The median seems to be around €10,000.


No lawyer either and I don't know how to parse the legalese of the actual text [0]: "Infringements of the following provisions shall [...] be subject to administrative fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year, whichever is higher:" but reading that as a generic "max(10 million, 2% worldwide annual revenue)" seems unlikely to be correct - especially given all the "up to"s in the sentence.

[0] https://gdpr-info.eu/art-83-gdpr/ Art. 83(4)


How is this the top comment on the thread? Do people really believe that failing to respond to a self-XSS report on HackerOne to the satisfaction of the reporter would cause someone to lose their PCI certification?


> failing to respond to a self-XSS report

This really downplays the report or shows a complete lack of understanding.

Getting access to someone's PayPal account, which could potentially mean all their credit cards and banks, is definitely an issue that needs to be addressed. This in itself should not be reason to lose PCI certification.

However, as the article further indicates [1], failure to respond (or even closing the issue without resolving) is a completely different story.

[1] https://cybernews.com/security/we-found-6-critical-paypal-vu...


It's not at all clear to me what you're saying here. Are you making a case that the whole report all put together is impactful? Or are you actually trying to argue that self-XSS is a critical security vulnerability?


The former; the author's report (short of seeing what was intentionally left out for proper disclosure reasons) is credible, and PayPal's failure to respond to or remediate the issue is improper.

Getting access in this way to users' financial accounts is absolutely a vulnerability.


They are getting access to the accounts of people who do not have 2FA enabled and whose credentials have been stolen. Every bounty program I've ever paid attention to would close that report. Risk-based anti-ATO systems are heuristic.


> They are getting access to the accounts of people who do not have 2FA enabled and whose credentials have been stolen.

Ok, first, and foremost, don't you think it's a problem if there are stolen accounts? Wouldn't it make sense to visit the .onion site that the author refers to in the article and lock access to all accounts found there?

> Risk-based anti-ATO systems are heuristic.

This is PayPal practicing defense in depth, which is great. What's not great is that the 2FA defense layer could be defeated.

> Every bounty program I've ever paid attention to would close that report.

Getting access to a user's financial account and being able to move their money is something I would take seriously 100/100 times. I hope you're not paying attention to bounty programs in the financial sector.


no shit PCI-DSS is a farce

it's just there to make people that don't know anything about technology feel better


It's certainly very effective at providing said people with a false sense of security. It's also another buzzword they can utilize to waste people's time during meetings.


It's just like ISO-9001.


exactly, it is simply marketing (/ politics) really


PCI-DSS does not have Bug Bounty requirements. That's referring to ASV scans which have to be run quarterly by a specific list of vendors and then there's a dispute/remediation process.

Their response is dogshit but not for this reason.


My suspicion is that PayPal is now "too big to fail" and will suffer very few consequences, if any at all.


"If PayPal’s PCI-DSS compliance certification isn’t revoked then PCI-DSS is a farce."

PCI Compliance is total bullshit and everybody knows it.[1]

[1] https://www.rsync.net/resources/regulatory/pci.html


There is no doubt that the PCI-DSS is a farce.


Are there other cases where PCI-DSS compliance requirements are selectively enforced?


From this article [1], I get the feeling that PCI-DSS has always been selectively enforced.

[1] https://www.anitian.com/the-failure-of-the-pci-dss/


If you are receiving your money or reputation from a platform (like HackerOne) then you are going to be underappreciated, undervalued, and treated like an expense that should be minimized.

Here is what responsible disclosure looks like in 2020 from somebody that has self-worth:

> (Message posted to Hacker One, and emailed to any address you can find, and sent in a letter by mail. Yes mail. Also copied in all those ways to investors of the target.)

>

> Dear Sir or Madam:

>

> I have learned about a security issue in PayPal's service. This includes being able to login to user accounts without the credentials the system is expecting. [Be vague about how exactly it works, but explain the impact.]

>

> I am not an employee or contractor of PayPal and I will publish this on my blog at https://privacylog.blogspot.com to build on my reputation for finding and improving the security of internet systems.

>

> This post will publish on 2020-03-09, which is two weeks from today.

>

> If you are committed to fix this issue before public disclosure, I will be happy to work with you. You can contact me at ...

---

Key points:

- The discussion is about my reputation and values.

- I am not demanding any payment (not sure if that is legal).

- Set a firm publish date.

- This asks them to make a commitment to fix and frames the discussion going forward.

And if they do not get back to you, then when you publish you explain it just like you see in newspapers: "the vendor failed to respond and act on this report when I contacted them by email, social media and paper mail with two weeks' notice".


Moral of story is obvious: Next time sell the exploits on the dark web and skip the blog post.


It's all that PayPal deserves if they get a pass for PCI-DSS non-compliance.


I'm sure that'll be a great comfort to the victims of whoever those flaws are sold to.


Who would you like to be upset with in a case where the black market is more efficient than HackerOne?

If the legitimate channels are not working then the system is broken and you should blame PayPal and HackerOne. Be pissed at PayPal for not making it easier to report real issues. Be pissed at PayPal for not finding the issues themselves.


The failure of the legitimate market causally explains the emergence of a black market.

It doesn’t morally vindicate people who sell exploits on the black market.

This is, I don’t know, kindergarten-level ethics? I’m flabbergasted at the self-serving rationalizations here.


To be clear, I am not personally advocating the use of the black market to sell exploits. I'm saying that the black market may be more attractive in cases like this and that anger at the black market in this case is misguided. The failure is with PayPal and/or HackerOne in this case. When the black market is more efficient that is a failure of the legitimate marketplace and you should place your anger with the legitimate marketplace that failed.


Or, and I know this might be hard to grasp if you're the kind of unscrupulous individual who would sell exploits to criminals, I could also blame the unscrupulous individual who sold exploits to criminals. Are they deserving of a pass for some reason? Fuck them.


Bad Guys exist and we can't ignore them, like it or not. A head-in-the-sand approach is not good for security either. Yes, fuck the bad guys but seriously fuck those that refuse to acknowledge and fix real issues.


> They deemed this issue a Duplicate, and we lost another 5 points.

A dupe costs points?! On bugcrowd you GET points for dupes...


The points associated with a duplicate report depend on the status of the report you get duped to. I assume in this case the original report was Not Applicable.
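
For anyone unfamiliar with the mechanics, roughly (an illustrative sketch; the only number grounded in this thread is the -5 for Not Applicable, treat the rest as assumptions from memory):

    # Approximate HackerOne reputation deltas by close status.
    REP_DELTA = {
        "resolved": +7,        # assumption
        "informative": 0,      # assumption
        "not applicable": -5,  # matches the -5 described in the article
        "spam": -10,           # assumption
    }

    def duplicate_delta(duped_to_status: str) -> int:
        # A duplicate inherits the effect of the report it is duped to,
        # so a dupe of an N/A report costs you points.
        return REP_DELTA.get(duped_to_status, 0)

Which is why a dupe on H1 isn't automatically neutral or positive the way it apparently is on Bugcrowd.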


Oh, so an N/A dupe? That sounds plausible.


The policy of the company I worked for was only to dupe to closed issues if those issues were Resolved -- if the duplicate issue was already closed Informational or N/A, we just closed the new one with the same status. This has advantages in avoiding researcher confusion, as illustrated here.

But that was a company policy, not an H1 policy. It's perfectly possible to dupe to a closed issue. (And of course, it's also possible that you get duped to an open issue which is later closed N/A, though that's pretty awkward. You kind of hope for N/A issues to be closed right away, not to stay open for long periods.)

And not duping to closed issues causes other issues -- it meant always having to leave an internal comment citing the other issue that this one was secretly a duplicate of.


Could you state that the newly reported issue is both a duplicate and that the original report was closed as N/A?

Not Applicable typically means the reporter is free to try to argue that it is in fact applicable, but by stating it's both a duplicate and N/A, neither the second reporter nor the company will spend further time arguing back and forth, as even if the issue were applicable the credit would go to the original reporter.


What goal are you trying to achieve?

It looks like what happened here was that the issue was (explicitly) labeled a duplicate, and the original issue was (implicitly) N/A, which you can tell if you're familiar with the platform by the fact that the duplicate report cost reputation points.

This achieves the result you mention, that interest in litigating the report further is muted because it's a duplicate. Though you might want it recognized as applicable anyway because of the reputation effects, even if you're the duplicate.

I did once see a company receive a report that duplicated an earlier report that had been closed by mistake. When the new one prompted a reexamination, they reopened the earlier report and duped the new one to it. That struck me as pretty honorable compared to the easier path of leaving the closed report closed and just processing the new one as if it were new.


Seems like a dumb policy unless you can see all previous reports.


I've had plenty of problems with bug bounty platforms and have completely stopped doing them. But most/all of these "critical" reports aren't critical and some of the behavior of their "researchers" is unprofessional at best. There's maybe one legit report here, and that's #2.

#1 "In order to bypass PayPal’s 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy."

So you need to be MITM'd and have a malicious cert installed? Yeah... not "critical" and out-of-scope for most places.

For "#2 Phone verification without OTP", look at the messages they were sending. Did they not understand H1's responses? Repeatedly demanding answers isn't a great look. It's not surprising it was locked.

For #3: it requires stolen creds. A "security" flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.

#4 was a dupe

#5 is a self XSS, no one accepts these

#6 is a stored self XSS and a dupe


I agree that none of these reports could be considered "critical." I also agree that the tone is a bit unprofessional. I'd add that I generally find these publicity pushes using fairly bland findings to be distasteful. All that being said, I'd like to clarify a bit based on my read of #1.

> #1 "In order to bypass PayPal’s 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy."

> So you need to be MITM'd and have a malicious cert installed? Yeah... not "critical" and out-of-scope for most places.

In general, using a proxy to perform a 2FA bypass wouldn't decrease the risk. In this case, the attacker already has compromised credentials, and they are trying to bypass the secondary control. As they are the one authenticating, the need for MITM isn't a huge deal.

That being said, another point that was made is that the "2FA" they are bypassing isn't actually PayPal's 2FA. Instead, it is a secondary, risk-based validation. A bit of a semantic difference, but important to note: if a user was actually using 2FA, this bypass wouldn't get an attacker with compromised credentials access.


> So you need to be MITM'd and have a malicious cert installed?

No. The attacker is the man in the middle to himself, because why are you trusting the client.

> A "security" flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.

The feature is meant to stop people from using stolen creds.

It does not work.

Given that stolen creds exist, that sounds like a security flaw to me.
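
To spell out why "requires a MITM proxy" doesn't mitigate anything here: the attacker proxies their own device, so no victim needs a malicious cert installed. Something in the spirit of this mitmproxy addon sketch (the endpoint and JSON flag are invented for illustration, not PayPal's actual API):

    # Run with: mitmproxy -s bypass_sketch.py
    # The attacker points their *own* phone at this proxy and trusts its CA.
    import json
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Hypothetical endpoint/flag: if the client decides whether to show
        # a step-up challenge based on a server hint, the attacker in the
        # middle of their own connection can simply rewrite that hint.
        if flow.request.pretty_url.endswith("/auth/challenge"):
            body = json.loads(flow.response.text)
            body["stepUpRequired"] = False
            flow.response.text = json.dumps(body)

Any check that is enforced in the client rather than on the server falls to exactly this.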


As far as I can tell, "it's impossible to use stolen credentials" is not part of Paypal's security model or a promise that Paypal has actually made. It's important to distinguish flaws in a company's security model from additional security controls we think they ought to add.


But why does that system exist at all, if it's not supposed to do something?

It's not "impossible to use stolen credentials", it's "you have to clear this barrier to use stolen credentials". If that barrier is broken, that sounds like a flaw in the security model.


It's analogous to the systems that make your password show up as dots when you type it in. The security model isn't affected at all - dedicated attackers can "break the barrier" by just looking at your keyboard instead - but it significantly mitigates harm by making certain low-effort attacks harder to perform.


A not-for-sure harm mitigation as part of defense in depth is a great thing to put in a security model.

But with the flaw here, the effectiveness drops to zero. The "making certain attacks harder to perform" is basically gone in the scenario where someone is buying a bunch of credentials. That's not good!


Although I have rejected many similar MITM reports myself, in this case I think it is a valid threat. This is not some random comments or forum site with almost no value to attackers; we're talking about a pseudo-banking system, where users usually have a few credit cards hooked up and/or some account balance, and there are indeed many places you can buy leaked or stolen credentials. The ability for hackers to bypass the automatic 2FA is a little alarming for a service where users may lose $1000+. This simply should be fixed and some bounty should be paid for it (probably not the maximum bounty, but still).

#5 and #6 are indeed exaggerated, especially since even if a hacker has stolen credentials and has bypassed the automatic 2FA, the security question won't be displayed on the same page users use to confirm payment, so it can't be used to replace the e-mail address or to keylog credit card information.


There is plenty of blame to go around beyond the management. Management is always going to deflect, deny, or do whatever it takes to save face. There must be “architect/lead engineer” level folks whose primary task is to engineer this stuff well. WTF are they doing?

There should be a wall of shame for these (not by person, but by company and group). Next time you get a contact/candidate who “led the sign-on 2FA management” at PayPal, we will know to be extremely cautious.

There is no “karma” in the tech world. People design the shittiest systems in company 1 and then move on to some other role in company 2 and float around taking credit for more and more stuff someone else did.


As someone in management, I will somewhat agree that management too often deflects from its ownership and responsibility, but what you are saying is also a form of deflection, unless you espouse a kind of paternalistic oversight over architects/engineers that would absolve them of responsibility by treating them as mindless executors of management's commands. I suspect that is not something most here would support.

There needs to be a balance, each party needs to play their own role and work in unison. As much as managers need to manage things and largely clear the way for architects and engineers, architects and engineers need to perform their job and role, to which I would argue belongs adhering to industry standards for security as a core aspect.

If there was clear pressure on, or even overriding of, architects/engineers insisting on adhering to standards, by managers who were not performing their role of advocating on behalf of or negotiating for those architects/engineers, and who were instead sabotaging them and their product, then sure, it's a management failure. But at that point, architects and engineers should have outright refused and revolted against those managers, or at the very least clearly and expressly voiced their vehement opposition.

As a manager, I would have even stuck my neck out and sided with an architect and engineer rebellion if they were pressured or even asked to sacrifice core requirements. I also understand though that not all organizations have managers that would do that, especially in careerist organizations where managers see people as bodies to pile up to climb the ladder faster.


I am surprised you read it that way. I was justifying what management does, as in: they are usually paid to make sure shit doesn't hit the fan. But the architects/engineers who design these systems should have made better decisions.


Sounds like PayPal business as usual. Crappy company with crappy attitudes. It's fascinating when people spend their time and effort on good causes instead of joining the dark side, just to be shown the middle finger.


I have not used PayPal since I had to file a dispute over an item I bought on eBay via PayPal. In response, they snail-mailed me a bunch of screenshots of an internal web app full of info for someone else: SSN, CC number, address, etc. Everything I would need to do something bad. I called them and they did not seem to care, so I called the guy (I had his number, of course) but he never answered or responded to my email.

A few months later I got a voicemail from paypal, apparently my original call bubbled up. They asked if I had destroyed the info and to let them know if I had not (I did). Then there was a long pause (I guess they assumed the voicemail was over), and it turned out there were 4-5 people on that call and they then discussed how the call went and whether or not it was sufficient to CYA.

I've not used it since, and I hoped they got their act together (sounds like maybe not).


My experience with PayPal, from dev support to account managers, has been an absolute shit show. They were simply the first to their market and it's hard to kick them out.


It was difficult at first (this happened quite some time ago), but these days it seems there are lots of non-PayPal options.


Most turnkey ecommerce solutions (for my case, event ticketing services like Tito and Eventbrite) seem to mainly support only Stripe and PayPal.


> Then there was a long pause (I guess they assumed the voicemail was over), and it turned out there were 4-5 people on that call and they then discussed how the call went and whether or not it was sufficient to CYA.

That's hilarious. Please tell me you kept that recording.


I'd love to tell you that. I had it on my voicemails for a long time but forgot and switched carriers and lost it :(


What does CYA mean? Haven't seen this acronym before.


Cover Your Ass.


Unfortunately, for many companies, bug bounty programs have been the best invention for silencing security research and CVEs. They promise the world, beat you down on severity/payouts, sometimes just claim duplicate or known issue with no way to verify it, and then block public disclosure. Very frustrating.


paypalsucks has been a registered domain since 2002 for a good reason.


That is a hilarious random fact. Thanks for the info.


>When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level. This happened even when the issue was eventually patched, although we received no bounty, credit, or even a thanks. Instead, we got our Reputation scores (which start out at 100) negatively impacted, leaving us worse off than if we’d reported nothing at all.

That seems like a good way to make sure nobody trusts your business. What say you, HackerOne? How can anyone trust a business acting against what are ostensibly its core functions?


They had out-of-scope issues closed as being out-of-scope, which automatically lowers their reputation on the platform. The researchers are outraged:

> When we submitted this to HackerOne, they responded that this is an “out-of-scope” issue since it requires stolen PayPal accounts. As such, they closed the issue as Not Applicable, costing us 5 reputation points in the process.

But Paypal's policy really couldn't be clearer:

> Out-of-Scope Vulnerabilities

> Vulnerabilities involving stolen credentials or physical access to a device

( https://hackerone.com/paypal )

If Paypal says "don't send us this type of report", and you send one anyway, are you really surprised when your account gets a warning attached saying "this person usually files low-value reports"?


Yet PayPal's policy explicitly says authentication bypasses, like the 2FA bypass they showed, are in scope:

>Authentication or authorization flaws, including insecure direct object references and authentication bypass

Reading it in the context of the other out-of-scope items, I think they meant that the ability to buy or steal someone's credentials is not a vulnerability in and of itself.

>Vulnerabilities involving stolen credentials or physical access to a device

It is a poorly worded and confusing policy. Yet, if I found a 2FA bypass and I read that policy I would conclude that it is in scope and submit the issue.


> It is a poorly worded and confusing policy. Yet, if I found a 2FA bypass and I read that policy I would conclude that it is in scope and submit the issue.

If you wanted my advice as something of an insider to the platform, I'd say that you should point to the ambiguity there ("One policy says yes, another policy says no?") and ask for an Informational close rather than Not Applicable. (H1 hates it when researchers ask for a specific close status, but it's common and often reasonable.) Closing your report Informational instead of Not Applicable costs the company nothing, so even an argument that isn't very strong on the merits can carry the day.

I wouldn't push for a payout, given the out-of-scope phrasing. If executing a successful attack requires you to possess stolen credentials, they're on solid ground when they tell you the attack is excluded by their policy.


They apparently have fake / security-theater 2FA, where things are as inconvenient as 2FA, but PayPal explicitly doesn't care that it's easily bypassed.

They also have opt-in 2FA.

It’s unclear which one the author bypassed.

Perhaps the confusion is by design on paypal’s side? Presumably giving people a false sense of security helps them close disputes without paying out?


I think the confusion results from attempting to encode "use good judgement" in formal language. I suspect the reason they talk about stolen accounts being out of scope is because someone bought a bunch of stolen accounts and then demanded a bounty.

Completeness or consistency (choose one)


Bypass means skipping steps on PayPal's side (like reading user data without a password), not skipping steps on the user's side (stealing their password).


With the acquired information, the researcher has a few options:

- bulk acquire stolen credentials, bypass 2FA, bypass the security checks when sending money, and accumulate wealth

- sell above process to anyone that has an internet-connected device, the desire to accumulate wealth, and willingness to commit fraud (which I would guess is a non-trivial % of the world's population)

- disclose the vulnerabilities to PayPal through any available channels

The fact that they went with the last option AND were punished for it doesn't shock you? Jesus.


Their "In-Scope Vulnerabilities" explicitly includes XSS exploits, though, which they also closed as Not Applicable (after patching the issue).

Tangentially, as a (former?) PayPal user, it's wild to see that they consider vulnerabilities involving stolen credentials as a non-issue. Why do they offer 2FA at all, then?

e: After taking another look at that massive Out-of-Scope list, I'm having a hard time imagining a bug that couldn't be closed as "Not Applicable." What a sham.


I can't really evaluate #5, but their screenshot undermines them -- it shows a chat session between the victim and "PayPal Virtual Agent", with the virtual agent offering some canned text.

If that's all you can do, then this is a self-XSS, which is excluded.

#6 is much more clear; that one's very obviously a self-XSS.


Their full "out of scope" list also includes MITM attacks, which would exclude most of what's in this article. I guess my confusion now is why PayPal even purports to offer bug bounties if they're going to craft an "out of scope" list that allows them to reject every submitted report.


> I guess my confusion now is why PayPal even purports to offer bug bounties if they're going to craft an "out of scope" list that allows them to reject every submitted report.

They're not; you're just choosing to assume bad things about them. Their out-of-scope list is fairly standard. If you asked a guy on the street "what would hacking PayPal look like?", the answer they imagined would probably be in scope.

For example, if I send you a link to my personal website, and when you visit the website your PayPal account automatically sends $500 to my PayPal account, that's in scope.


> automatically sends $500 to my PayPal account, that’s in scope.

Nope. From the out of scope list:

> Attacks involving payment fraud, theft, or malicious merchant accounts


You're misinterpreting it.


I'll happily admit that I have no experience with bug bounty programs. I'm just a heavy PayPal user who's shocked to learn that PayPal apparently doesn't care whether someone is able to bypass their security measures. Whether or not that's "standard" doesn't really change my reaction.


All the more reason not to submit bugs like this to HackerOne. If you can bypass 2FA by having only one factor, then I wouldn't consider that 'stolen credentials' so much as a singular stolen credential. Their system is designed to defend against this and does so ineffectively. That is, by definition, a security issue.

I wish I could define what is and isn't a bug in my code at work. My defect rate would be incredible.


The mythical “it’s not a bug, it’s a feature!”


But this is the thing:

"This happened even when the issue was eventually patched..." Based on that, I understand their gripe here.


That would be a valid complaint if their report had been closed Not Applicable on the grounds that the behavior didn't present a significant security risk. But it wasn't; it was closed Not Applicable on the grounds that it was ineligible for the program regardless of whether it was a security risk.


hmm yes, I see your point


The vulnerabilities they found allow bypassing the two-factor auth, so the attack works with only part of the credentials.


This is a bit tangential to the topic, but I find it immensely more interesting that after literal decades of utterly egregious abuses and downright evil behavior by PayPal, people still seem to be surprised by this type of behavior.

I find it so fascinating because it is a manifestation of the mentality of abused people, the kind of people whom others see as trapped in an inability to internalize the abuse being perpetrated against them, and who therefore rationalize, excuse, and ignore, simply to push away, hide, and suppress the clear abuses happening to them. It's just as sad as it is interesting to me because of the inherent illogical puzzle it represents, a puzzle that clearly has not yet been solved and for which there exists no easy and clean solution. How do you get someone out of an abusive relationship, be it a personal relationship or something like a formalized cult?

We are all abused by PayPal and other tech companies on a constant basis, yet all we do is lament the treatment while simply continuing on in the abusive relationship. Someone should tell PayPal, etc. "no, you are not allowed to abuse us anymore. We have human rights, and your lies, deceit, abuse, manipulation, gaslighting, monopolization, etc. are not going to be tolerated anymore." But I guess our other abusers in Congress get too much money and free meals out of it to change that.


I know of a really nice vulnerability but I know when to shut up. I almost scammed a legitimate business by mistake. Made sure to pay them with a normal bank transfer instead. Don't want to complain in case their account gets closed down or something. Not touching PayPal again.


I've seen several stories about how HackerOne doesn't pay out bug bounties when bugs are reported. I, for one, wouldn't submit bugs/PoCs to them, and I would actively, publicly, and immediately disclose bugs that affect anybody who is a client of HackerOne.


HackerOne, itself, is pretty generous about reported bugs. (As in, you reported an issue in the website hackerone.com.) They have to be, because their existence depends on everyone thinking bug bounty platforms are a good idea -- it's part of their way of encouraging people to hunt for bug bounties in general.

Payouts for bugs in other products are determined by those companies, not by H1.


The point of being a branded platform is that you take responsibility for the activity on your platform. Otherwise you are just an email gateway.


It is possible to escalate your dispute with a company to H1 itself. They'll review the report and the company's policy, and they may contact the triager or the company to try to resolve any questions.

I wouldn't do that as a regular thing; you're pretty well guaranteed to piss off everyone on the company's side of things.

I should note that I've personally seen probably in excess of $100,000 paid out through H1; the payouts do happen.


That sounds like a payout lottery. H1 can't force its customers to pay. It's acting as a go-between on behalf of its customer, the company offering the bounty, not as a neutral arbiter when there is a dispute.

Perhaps I would take them seriously if there were an escrow account that companies paid into, with funds released to the reporting party when a plurality of multiple, disinterested parties agreed that the report was valid.
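
A back-of-napkin sketch of that escrow idea (hypothetical, nothing like this exists on H1 as far as I know, and "plurality" is simplified here to a strict majority):

    # Company funds the escrow when the program opens; the payout releases
    # only when a majority of independent reviewers validate the report.
    def release_payout(escrow_balance: int, bounty: int,
                       reviewer_votes: list[bool]) -> bool:
        majority_valid = sum(reviewer_votes) * 2 > len(reviewer_votes)
        return majority_valid and escrow_balance >= bounty

The point being that neither H1 nor the vendor would get a unilateral veto once the money is committed.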


HackerOne can force their customers to pay; that's the entire point of their "guaranteed bounty" program: it's a guaranteed bounty!

Even with a guaranteed bounty and a critical security vulnerability, HackerOne will punt the entire thing to one of their Portswigger groupies for collection and then won't disclose the details about the discovered flaw that supposedly they found prior to your submission.

Those guys are terrible: a worthless product offering unless you are one of their clients getting free penetration testing and vulnerability analysis services.


Bah, no they aren't. HackerOne has a small collective of security testers that they consistently make awards to, over and over again. If you submit a critical vulnerability, magically one of HackerOne's top-ranked folks ends up getting the award, AND HackerOne won't share any of the triage information with you to actually prove that the vulnerability you submitted was legitimately discovered prior to your submission.

Junk company, and a waste of time and effort that results in all of their clients getting 95% free security analysis services.


> I would actively, publicly, and immediately disclose bugs that affect anybody who is a client of HackerOne.

Sadly you can't feed your children from media drama.

Maybe in the long run, but you're more likely to get sued.


> Sadly you can't feed your children from media drama.

By the way, if the problem is "how do I reliably get money from bug bounties" (as opposed to "I found a cool bug, what do I do with it") --

I strongly recommend finding a product with some kind of barrier to entry. Most researchers on these platforms are very low-effort. A gigantic, complicated product, like Workday, or even better a gigantic, complicated product that requires payment (!), like Slack for Enterprise, will usually not be getting very many reports. That product is hard to understand. But that means that -- once you've put in the effort to understand the product -- there's a lot more low-hanging fruit, and the company is likely to treat researchers better because of the lower report volume.


The market for a freelance security researcher out there is hard, no doubt, but disclosing bugs publicly is an addition to your resume, akin to any other professional development you do. It demonstrates you can do the work and it shows the skills you have.

Suing someone for disclosing an actual bug is a long term losing proposition for any company in a competitive industry.


> but disclosing bugs publicly is an addition to your resume

Request disclosure on hackerone then. Idk, breaking the law to get a job doesn't seem ok to me.


The screenshot in #2 does show the H1 Staff screwing up -- @cybernews requests disclosure and gets a response saying "you may request disclosure if you would like this reviewed, using the drop down menu" (which @cybernews has already done).

@cybernews' behavior in that thread isn't ideal, but they're more in the right than in the wrong on that one, judging by the screenshot.


I'm not talking about this case specifically.

At least Paypal was notified before the public disclosure!


Full disclosure isn't a crime in the United States, at least.


Hacking PayPal is a crime tho'.

Except for when you play their game, which means: submit bugs via h1 and only disclose if they allow.


Legitimately interested in your explanation as to how this specific research would be a crime absent contact with HackerOne. Please cite statute. I'm not saying you're wrong - simply asking you to back up your claim with evidence.


I'm sorry, I won't do that; I don't know the specific statute. I'm pretty sure there's something like the Computer Fraud and Abuse Act. If you don't follow their rules, how would it be legal to hack their servers?


> Sadly you can't feed your children from media drama.

So it seems like the real answer in these cases is selling the exploit on the "dark web". I mean why not? The vendor doesn't seem to care about security anyway.


"Dark web" for things that are not relevant to Five Eyes and NSA when they are relevant. At least in those cases, with good opsec for the "dark web", you can be reasonably sure the company who made the product can't retaliate against you.


This is what I was thinking. The only stories I ever see about HackerOne are about how horrible they are. As a non-sec dev, I get the feeling that bounty hunting for profit is like trying to sell something on eBay: you're eventually going to get scammed and have to eat the loss.


I think you might want to take a breath, rethink that position, and not let your anger cause you to do something stupid. If you disclose a vulnerability, the company HAS EVERY RIGHT to sue you. Every security researcher _thinks_ they are protected by some unwritten good Samaritan law, when in fact you are hacking, and that carries financial and criminal penalties. This is why these bug bounties and established channels for notifying the company of vulnerabilities exist. Stepping outside those established channels can be VERY costly. Imagine that, in a moment of unclear thinking and childish behavior, you do something that costs you your livelihood and financial well-being and maybe even gets you thrown in jail.


> not let your anger cause you to do something stupid

Note: I didn't say that I would do this for every company -- just ones that use HackerOne. They have decided to abdicate their responsibility for their security vulnerability reporting, and I feel completely justified in dumping info on their vulnerabilities.

Releasing the details of a vulnerability is not stupid. The users of the software/service deserve to know that the data/service they're using is unsafe when a vendor refuses to act on a valid security issue.

>If you disclose a vulnerability, the company HAS EVERY RIGHT to sue you.

You don't need the right to file a lawsuit to file a lawsuit. You just file the lawsuit. Now, you need an actual, actionable claim to prevail as a plaintiff in a lawsuit. Whether such a thing exists in practice is something we leave to lawyers to argue about and judges/juries to decide.

If your company is in a competitive industry and I release the details of a vulnerability in your software and you sue me, then that vulnerability and lawsuit become marketing item number one for all of your competitors.

>This is why these bug bounties and established channels for notifying the company of vulnerabilities exist

Arguably why they exist. In reality, they tend to exist to give people an incentive to not dump the vuln details on the black market, embargo bugs so customers don't leave, and attempt to maintain a good relationship with security researchers. They do not grant immunity from being sued or somehow grant the legal right for security researchers to do their work as your comment seems to indicate.

Your post reads like propaganda from a bug bounty organization. I'm not saying that you're shilling, just that you're misinformed. In the US it is generally legal to conduct security research. In the US it is legal to communicate the results of that research publicly so long as you have not agreed in some contract to not do so.

Where did you get the idea that legitimate security research is a crime?


I'm not going to argue with you. If your actions and attitude get you in trouble, it won't affect me in the least, nor do I care. So if you want to continue to be self-righteous and say and do stupid things, that's on you.


Insane comment. As a customer of these companies, I find this attitude borderline criminal and a big cause of the repeated data breaches. Why should I trust any company that sues security researchers for disclosure?


"Why should I trust any company that sues security researchers for disclosure?"

i didn't say that, i said there are established channels for reporting such things and going outside those channels carries risks.

edit*

my bad... i read your comment wrong.

i could be wrong, but i think you meant to say "Why should I trust any company that sues security researchers for reporting a vulnerability?"

i agree that that would totally suck.


I say it all the time: there are no real incentives or rules around cybersecurity standards, and companies have no obligation to follow the ones that exist. The costs and risks of poor cybersecurity are high, and the public are always the first victims and the ones who pay for the damage.

Cybersecurity has always been a national problem, and it should be addressed by law.

Insurance companies and banks should at least be encouraged to do more.

Cybersecurity shouldn't have to rely on bug bounties to improve.


Are HackerOne analysts employees of the company? If so, the conclusion drawn sounds like complete BS.

If the analysts are just other users, then it definitely sounds like there is a problem.


Bug triagers may be employees of HackerOne, employees of the company (e.g. Paypal here), or contractors indirectly working for the company (I worked in this role for a year). They're not going to be random other researchers.

The screenshots in this article show a "HackerOne Staff" stamp, so those triagers are employees of H1.


So it’s pretty far fetched that they would be purposely delaying reports so they could steal the rewards.


Hmmm.

> Other criticisms have pointed out that Security Analysts can first delay the reported vulnerability, report it themselves on a different bug bounty platform, collect the bounty (without disclosing it of course), and then closing the reported issue as Not Applicable, or perhaps Duplicate.

Time to response is tracked. Reports are timestamped. So delaying response to a report is a bad strategy -- reports are supposed to get precedence based on when they were filed, not based on when they were responded to, and the delay will be a black mark on your triaging record. This is the main objection to Cybernews' conclusion.

Similarly, I'd be a little surprised if one company had a presence on multiple bug bounty platforms. The standard flow is that you find a bug, look up the company, and report it to them using whatever they tell you is their standard. I've seen many reports including text like "Hello, I sent you this report by email, and I was told I should file it on HackerOne". (Including this text, if it's true, is a good idea for multiple reasons.) Centralizing reports makes many things more convenient -- including timestamp comparisons, but much more importantly it makes checking for earlier duplicate reports easier.

I'd also be a little surprised if HackerOne allowed their triagers to file reports for the same companies they do triage for. They hire triagers from the researcher pool, and they do allow triagers to hunt bounties on their own time, but it would be a common-sense protection to prohibit them from reporting to the same companies they triage for. I haven't worked directly for H1; I don't know whether they have such a policy or not.

In conclusion, there is some potential for abuse, but it's unlikely that a triager can abuse the system in the most obvious way, by personally stealing reports that come to them for triage. I'd worry more about a triager prioritizing their friend's report over a stranger's. I don't think triager abuse is a significant risk of reporting through H1. I don't know the details of what protections are in place.

later edit: on H1, each report has an ID number assigned in sequence -- if you dupe an earlier report to a later report, the researcher will definitely notice and complain.
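
To make the ordering argument concrete, here's a toy sketch (hypothetical structure -- H1's internals aren't public): precedence keys off the filing sequence, not off when a triager responds, so sitting on a report doesn't change who wins a duplicate.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Report:
        report_id: int   # assigned in sequence when the report is filed
        filed_at: float  # submission timestamp, set by the platform

    def duplicate_winner(a: Report, b: Report) -> Report:
        # Precedence goes to the earlier filing; when triage happens is
        # irrelevant. And because IDs are sequential, a researcher whose
        # report is duped against a *higher* ID can spot the problem
        # immediately.
        return a if a.report_id < b.report_id else b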


not sure: https://www.hackerone.com/blog/Getting-to-know-the-HackerOne...

"When they aren’t triaging reports on our platform, they are spending time on their own bug bounty hunts."


The author might as well have sold the exploits to the highest bidder.

If a company advertises a bug bounty program but fails to follow through, such a company kinda deserves to be hacked. I mean, you are wasting people's time while still getting critical bug reports, probably along with a detailed write-up.

We might also discuss the fact that for a company that moves (and earns) as much money as PayPal, 30k USD is probably very little compared to the possible cost of being hacked.


We run a private bug bounty (not via H1 but another platform), classical pentests, dynamic code assessment and a responsible disclosure program.

Pentests are OK; they help squash plenty of bugs. Otherwise I am not a great fan, because the work is paid at a fixed rate.

The private bug bounty ended up fantastic. Great bugs, great researchers, reasonably good pay (not Apple grade, but we paid some 30k€ IIRC). Feedback from researchers was good, including unexpected public praise.

Dynamic code reviews are a mixed bag. Usually crap, sometimes hidden gems.

Responsible disclosure is a mixed bag too. It is very binary: 20% great submissions from great researchers, whom we usually invite to the private bounty afterwards, and 80% garbage. Oh man, the garbage. Often I do not even understand the submission (not being a native English speaker either).

One other problem with public programs is the legal implication of paying an anonymous reporter (imagine a US company paying someone affiliated with NC or Iran or Daesh, and that info being published in the press).


Given that there seems to be no consequence to getting hacked, I could see companies turning their bug bounties into a filibuster program. They can outsource their liability. I doubt the insurance companies who rate them care whether the third-party bounty administrator is effective.


I have no experience, but it seems to me that a bug bounty program would be ripe for abuse by employees intercepting reports, feeding them to a partner hacker, and then splitting the bounty between themselves. What stops that from happening?


> Most ethical hackers will remember the 2013 case of Robert Kugler, the 17-year old German student who was shafted out of a huge bounty after he discovered a critical bug on PayPal’s site. Kugler notified PayPal of the vulnerability on May 19, but apparently PayPal told him that because he was under 18, he was ineligible for the Bug Bounty Program.

>But according to PayPal, the bug had already been discovered by someone else, but they also admitted that the young hacker was just too young.

Bad PR like this shows that bug bounty programs are probably more trouble than they’re worth.


None of this is surprising. I keep the lowest possible amount of money in paypal to cover eBay charges. Too many horror stories.


HackerOne is complete garbage. I spent close to a month digging into Uber and compromised their m.uber.com mobile endpoint; they hemmed and hawed and then awarded the $25K to another HackerOne top performer stating that he had discovered the exact same vulnerability the day before I had submitted the report.

What's weird about it is that I was using Burp Proxy for everything, and this guy was directly connected to PortSwigger (and Uber was running some promotional for a free three month license for Burp Proxy).

HackerOne completely sided with Uber on everything, gave the Portswigger kid $25K and that was that.

So, in summary: HackerOne is trash, and Burp Proxy may contain backdoor functionality which is relayed directly back to Portswigger whenever a high value critical vulnerability is discovered with it.


Hi, I work at PortSwigger.

> Uber was running some promotional for a free three month license for Burp Proxy

This is flat out wrong - the promotional partnership was done with HackerOne.

> What's weird about it is that I was using Burp Proxy for everything...

Burp Suite is used by tens of thousands of security experts, and if we posted vulnerability data back we would get caught in about ten seconds. Also, it would be stupid and illegal, etc.

Could you share the username of this 'Portswigger kid'? As far as I know I'm the only person here that does bug bounty hunting, and I've never received a 25k payout off Uber. So I'm wondering if this person is actually affiliated with PortSwigger at all.


Either Uber lied about this guy discovering the flaw so they didn't have to pay me, or Burp Proxy is sending telemetry back to Portswigger with high value vulnerabilities being discovered with the platform. I worked with nobody on this attack, I shared no information with anyone else, and submitted a remote execution vulnerability using HackerOne's supposedly secure triage system.

I wrote it all up on Medium; it got close to 400K reads over the 2018 Christmas holiday, alongside many other stories in a similar vein about incompetence in their security group. HackerOne is worthless: a scam unless you are working bug bounties for them full time and are already connected with their top-ranked researchers.


The triage was escalated to Rob Fletcher and Uber's security liaison Lindsey Glovin. You're right, Portswigger was running a promo with HackerOne. After I submitted a couple of different vulnerabilities, they locked all of my reports and gave the $23,000 bounty award to "shubs (notaffy)".

These were three critical vulnerabilities on the m.uber.com endpoint: I was able to bypass their WAF and XSS Auditor protections, then demonstrated reflected XSS over SSL under the *.uber.com certificate and remote JavaScript execution.


Bah, there are several closed-source, binary-only plugins for Burp Proxy that constantly relay telemetry data back to Portswigger. I stopped using it for this exact reason: Burp Proxy's constant communication back to Portswigger. And the only thing that would need to be relayed back would be the high-value vulnerabilities being discovered.

That would be trivial to implement as a covert channel in Burp Proxy's update process, or via any one of a number of methods of obfuscating and tunneling that data back to Portswigger.


What would you expect HackerOne to do in the situation you describe? You filed a duplicate report. All of the malfeasance you allege is coming from Portswigger.


No idea which one it was, or both. 23K isn't something to sneeze at though, and would be plenty of incentive for the folk at Portswigger to work with douchebags like whoever this shubby dude is in order to collect these bounties.

24K for one bounty... or sell $299 licenses to nerds... hmm, which one is more profitable...


...the second one is significantly more profitable.


Yeah I bet. It would be interesting to see how many U.S. DoD networks have been compromised with Burp Proxy.


If anyone in this thread is interested in talking about their experiences with HackerOne, please shoot me an email at david.morris@fortune.com.

Positive or negative. We can set up a time to talk, or if you're more comfortable, just include details in your email.


Off topic, but did anyone else think that the diagonal black line on the Share button on cybernews.com looks like a very thin hair on your screen? It almost feels like it was done on purpose.


If a company or government ignores security vulnerability reports this way, then you publicize it: anonymously if necessary, through the press if necessary, to the bad guys if necessary.


My last two reports were closed as duplicates. I got some rep for one, and zero rep for the other. Both were real vulnerabilities. It is strange that the reputation reward is not consistent.


According to HackerOne's help page on reputation, it depends on the status of the original vulnerability: as yet undisclosed, not applicable, publicly known...
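
That would explain the inconsistency above: the same "duplicate" outcome maps to different rep depending on the original report's state. A sketch of the shape of that rule (values invented for illustration, not HackerOne's actual numbers -- see their reputation docs for the real table):

    # Hypothetical rep adjustments keyed on the original report's status.
    REP_FOR_DUPLICATE = {
        "undisclosed": 2,      # dupe of a valid, not-yet-public report
        "publicly_known": 0,   # you should have searched first
        "not_applicable": -2,  # dupe of a rejected report
    }

    def rep_change_for_duplicate(original_status: str) -> int:
        return REP_FOR_DUPLICATE.get(original_status, 0)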


Been there, seen that. As a researcher, I find it hard to trust bug bounty platforms with my vulnerabilities.


Interesting timing: a pro-HackerOne article on Slashdot posted hours after this post about HackerOne issues.


None of the bugs is critical, not even medium severity.


I have heard enough. I am done using PayPal.


Hackers of the world, unite!


Hmm, they don't look that bad - https://hackerone.com/paypal

Here's an example of something that got paid out by paypal - https://hackerone.com/reports/739737 (15K)

Good writeup - https://medium.com/@alex.birsan/the-bug-that-exposed-your-pa...

Interesting history with paypal - https://hackerone.com/alexbirsan

Here's how duplicate reports are dealt with - https://docs.hackerone.com/programs/duplicate-reports.html

I am curious whether PayPal provided the OP with the original reports. They don't say. I wonder how much the OP is not saying here, versus how much they understand the platform they are working on.

This statement makes me very curious: "Other criticisms have pointed out that Security Analysts can first delay the reported vulnerability, report it themselves on a different bug bounty platform, collect the bounty (without disclosing it of course), and then closing the reported issue as Not Applicable, or perhaps Duplicate."

How can you do that if you're providing the original report?

Also, the guy is just wrong. You GAIN rep points for duplicates, unless you did something dumb and really amateur like not searching first for already publicly disclosed issues.

https://docs.hackerone.com/hackers/reputation.html#effects-o...



