Lots of comments on this thread about bounty payouts. If a tech giant with a standing bounty program isn't paying a bounty, the odds are very strong that there's a good reason for that. All of the incentives for these programs are to award bounties to legitimate submissions. This is a rare case where incentives actually align pretty nicely: companies stand up bounty programs to incentivize specific kinds of research; not paying out legitimate bounties works against that goal. Nobody on the vendor side is spending their own money. The sums involved are not meaningful to the company. Generally, the team members running the program are actually incentivized to pay out more bounties, not less.



No, it's because Apple's 'product security' team that investigates and pays out bug bounties is horribly mismanaged and ineffective. It was recently moved from the SWE program office to SEAR (security engineering & architecture), and the manager was recently shown the door and went to Airbnb. The team members are mostly new college grads (ICT2s and ICT3s) who wouldn't pass a coding interview elsewhere in the company, and they mostly function as bug triagers. They spend more time going to conferences and hanging out with hackers than in front of a computer screen working. Their portal of 'open investigations' shows a graph that only goes up (i.e., they only get more swamped with emails and don't even try to catch up).

Shaming Ivan, the head of SEAR, on Twitter is how people who should get paid bounties, but aren't, make progress.


I have no idea about how well the bounty program at Apple is managed, so, without affirming this, I acknowledge this is another plausible explanation: it's just an understaffed team that needs to get its act together.

The only crusade I'm on is against the idea that companies ruthlessly avoid paying bounties, which is, on information and belief, flatly false, like, the opposite of the truth. I think it's valuable for people to get an intuition for that.

Thanks for this!


Honestly, Apple is a 3.5 trillion dollar company. If the bug bounty program is understaffed then it's an intentional choice and they should fix it. And I say that as someone who's generally sympathetic to Apple.


Sure. My comment isn't really about Apple specifically so much as bounty program misconceptions generally.


I think suspicion of bug bounties, even toward organizations that would clearly benefit the most from doing them right, is well founded, and you are oversimplifying the situation.

Every organization is full of situations where the overall best interest of the organization no longer wins out. Groups and individuals don't want to admit mistakes, both personal and in a wider sense, and they have alliances, rivalries, and team and organizational loyalties that twist their behavior.

A lot of organizations know they would benefit from having a proper whistleblower program and then proceed to crucify the first person who uses it.


Bug bounty programs aren't whistleblower programs.


AP is saying they can suffer from the same corporate politics.


That doesn't make sense, because bounty programs can't punish vulnerability researchers other than not awarding bounties, and whistleblower programs can punish whistleblowers. I got what that comment was trying to say, but, no.


It becomes corporate politics when 'blame' is assigned to the team responsible for the bug.


The preceding comment, I could follow. This one I cannot. But I think we're doing the same thing that's happening all over this thread, and trying to axiomatically derive how these programs work. I'm not doing that; I (like a lot of people) have direct knowledge of them. It's not much of a secret.


Huh? Whistleblower programs exist to defend whistleblowers and still fail to combat the problem; a program that directly punished them would be more like a bounty program that actually sent legal threats to security researchers.


That is being done too. Teenagers demonstrating vulnerabilities in school systems have been prosecuted in Sweden... Needless to say, the schools didn't get much help finding holes after that, so who knows how many security holes they still have.


> the idea that companies ruthlessly avoid paying bounties, which is, on information and belief, flatly false

Eh, that's probably true most of the time, but I've worked for a company that was attracted to the bounty-program idea mainly for the optics and very much pushed back on / was very reluctant to pay out bounties.

And when I say "for the optics" I mean not only for the company being able to boast about having a bounty program but also the executive in question having something for his quarterly report. Having it not be too expensive was definitely part of the deal.

Needless to say this was a terrible company with terrible leadership, but it's a data point...


Ok but not a company as reputable as Apple, yes?

Apple historically used to have a deservedly good reputation for this. I was quite shocked at this story.


> not a company as reputable as Apple, yes?

Definitely not, in fact rather the opposite. I was just sharing the anecdote as a counter to the otherwise fairly blanket claims being made upstream.


> Apple historically used to have a deservedly good reputation for this.

Did they? Apple started its bug bounty program (with monetary rewards) only about 5 years ago, 12 years after the first iOS release and well after everyone else. They are not very transparent about bugs and payouts (which is understandable), so I wonder where this good reputation comes from?

(if you count their invitation-only program then it started in 2016, 8 years ago)


That's the same thing.


Obviously not.


Getting paid to fuck off at conferences and hang out with hackers on the company dime instead of staring at a screen in a cubicle all day sounds pretty awesome. Do I detect some jealousy or resentment that you haven't mastered the art of the corporate grift?


This is like every software security team of every form in the whole industry. Sometimes it's real, sometimes it's not, but it's an evergreen problem.


There's a similar problem if you're an innocent software engineer who introduces a bug: the security people will find it, make up a fancy website and logo for it, go around giving conference talks about it, get bounties (or not), give each other prizes, post about it on Mastodon from their accounts with cool hacker nicknames, presumably go have Vegas orgies, etc. Nobody's doing that for you.

I think they could use a little more ritualized shaming: https://en.wikipedia.org/wiki/Leveling_mechanism

Only Linus is brave enough to do this.


that's the thing though: security teams composed of grizzled talent absolutely benefit from going to conferences. they bring back what they've learned and leverage their new connections to bring more value to the company. so now you've got this industry-wide norm that the security guys are kind of out of pocket and spend a bunch of time at conferences, but they know their shit and protect the infra so it's all good. it worked at the last X companies $CISO worked for, so they're going to be hesitant to drop the hammer on the netsec team networking.


Practicing the art of the corporate grift does take a toll on one's soul. Usually only a psycho/sociopath can master it and do it for a long time without any emotional or mental consequences.


You might be right - maybe Apple's poorly operated bug bounty program is a result of incompetence rather than intentional malice.

But does that matter to security researchers or the public? No. Apple should fix their bounty program regardless of the reason it's broken.

Ultimately, this blog post is just another example on an already large pile: [1][2][3][4][5]

1: https://arstechnica.com/information-technology/2021/09/three...

2: https://mjtsai.com/blog/2021/07/13/more-trouble-with-the-app...

3: https://medium.com/macoclock/apple-security-bounty-a-persona...

4: https://theevilbit.github.io/posts/experiences_with_asb/

5: https://shail-official.medium.com/accessing-apples-internal-...


Until we get to the total market dynamics (ie, the idea that "black markets" are an immediate substitute for bounty programs) I don't have a dog in this hunt or any reason to litigate the importance of changing how this particular program is managed. If it can be managed more effectively to the benefit of researchers without breaking internal incentives for the bounty program, I'm all for it.

I'd be rueful about leaving so many holes in my original argument, but I think these are useful conversations to have. Thanks!


Unless the implication is that the author of this post is misrepresenting things, I'm struggling to think of what "very good reason" there could be when there's a clear record of someone reporting a bug well before it's fixed. At best, it seems like typical slow bureaucracy, which I don't think is a particularly good reason. There's no reason it should take over a year for someone to approve something like this if the company actually incentivized it. Your logic might be sound, but it's hard for me to look at a situation like this and think "the company is either stingy or overly bureaucratic, like companies overwhelmingly tend to be in almost every other circumstance" is less likely than "the company has a legitimate reason not to pay out a bounty that ostensibly has been fulfilled". It just seems way more plausible that the incentives that apply pretty much everywhere else have bled into this domain, assuming the author is accurately describing the events.


Vulnerability researchers misapprehend the dynamics of bug bounty programs all. the. time. and are virtually never doing that in bad faith. I don't need to determine which of these two entities is above board; I presume they both are.

If you think that any major vendor bug bounty has incentives to stiff researchers, I'm commenting to tell you that's a strong sign you should dig deeper into the dynamics of bounty programs. They do not have those incentives.


Other than bad press there's no immediate incentive for the company to avoid stiffing researchers. Bug bounty programs work if the company is vulnerable to bad press and it would actually impact their bottom line.

This is not from an examination of when bounty programs work, but of cases where they have very demonstrably not worked in the past.


Press is a perfect example of incentive alignment in these programs, since not paying a bounty a researcher believes is deserved is practically a guarantee of an uncharitable blog post.


What process ensures that the company actually cares in the slightest about an uncharitable blog post or two, especially when its motivations are opaque enough that the lack of payment might be chalked up to "there's a good reason for that"?

If the cost of an uncharitable blog post is less than the cost of paying out the bounty, then a company would still be incentivized to find as many reasons to reject a payout as possible, as long as future reporters still believe they have a good chance of receiving a payout (e.g., if they believe they can sidestep any rejection reasons).


The cost of an uncharitable blog post is massively more than the price of a bounty, like, it's not even close. The cost of an uncharitable blog post is potentially unbounded (as in: not many people in a large tech company would know how to put a ceiling on the cost), and the cost of a bounty, even a high one, is more or less chump change.

Another in my long-running dramatic series "businesses pay spectacularly more for determinism and predictability than nerds like us account for".


> The cost of an uncharitable blog post is potentially unbounded (as in: not many people in a large tech company would know how to put a ceiling on the cost), and the cost of a bounty, even a high one, is more or less chump change.

Look up "apple bug bounty" on Google, or any other search engine of your choice, and you'll find absolutely no shortage of people complaining of issues with the program. If these complaints each cost Apple a bajillion dollars, then why haven't they shut down their program already?

Or, if almost all of those complaints are just from the reporter being dumb, then how are potential future reporters (who would care about the company's propensity to pay) supposed to find actual meaningful complaints among the noise?

I don't think that sporadic blog posts are nearly as powerful as you're making them out to be: my intuition tells me that the company can usually ignore them safely, short of them making front-page news.


Look, I believe you, but people complain about all these bounty programs, some of which I know to have been extraordinarily well managed, and usually when you get to the bottom of those complaints it comes down to a misapprehension the researchers have about what the bounty program is doing and what its internal constraints are. I acknowledge that another possibility is that the bounty program itself isn't performing well; that is a possibility (I have no actual knowledge about this particular case!)

The only thing here I'm going to push back on, and forcefully, is the idea that bounty programs have an incentive to stiff researchers. They do not. I cannot emphasize enough how "not real money" these sums are. Bounty program operators, the people staffing these programs, don't get measured on how few bounties they pay out.


My point is that while the sums might be "not real money", the cost of stiffing researchers is even more so "not real money", so it makes sense on the margin to do it whenever the situation isn't incredibly clear-cut.

After all, it's not like Apple goes around handing out free iPhones on the street, even though a few thousand units are similarly "not real money". Businesses care about small effects on the margin.


No, I don't think this logic holds, at all.


Which part does not follow? Even supposing that the members of Apple's bug bounty team are all well-meaning, but that the program itself is chronically mismanaged, one might conjecture that Apple is disincentivized from investing in making the program better-managed.


I'm not deriving this axiomatically. The bounty programs I'm familiar with incentivize their teams to grant more bounties. I don't have recent specific knowledge of how Apple's program works. Obviously, Apple is more fussy than other programs! They want very specific things. But a just-so story that posits Apple's bounty incentives are just wildly different than the rest of the industry isn't going to get you and I anywhere. It's fine that we disagree. I do not believe Apple ruthlessly denies bounty payouts, and further think that claims they do are pretty wild.

(I have no opinions in either direction about whether Apple is denying bounty payments because of difficulties operating the program!)


Perhaps I've been somewhat too harsh: I don't see any particular 'ruthlessness' in Apple's actions. But I do think that its program, as well as many other bug bounty programs, can easily end up more byzantine in their rules than they'd otherwise be, since there's not much incentive counteracting such fussiness.

After all, one might easily imagine a forgiving rule of "we'll pay some amount of money (whether large or small) for any security issue we actively fix based on the information in the report", and yet Apple seemingly chooses to be more fussy than that in this case, unless they're just being extremely slow. I just don't see any way to square such apparent fussiness with your experience of bug bounty programs leaning toward paying out more.


> I'm going to push back on, and forcefully, is the idea that bounty programs have an incentive to stiff researchers. They do not

I replied upstream, but let me push back here as well. They can, actually, if the bounty program is being run for the wrong reasons, which can happen - I know anecdotes aren't data, but I've seen one case first-hand.

If a bounty program is treated as a marketing project and/or an "executive value" project then they can and will be managed as a cost center and those costs will be deliberately minimized. Bang for buck. Now obviously this is perverse but if making your manager happy isn't an incentive then I don't know what to tell you.


Companies are not set up to accurately and effectively gauge the impact of intangible costs to themselves.


Exactly, which is why intangible costs will tend to be overpriced compared to risks with low cost ceilings, like "paying out an extra bounty".


I think both the point you’re making and the idea you’re arguing against ascribe a level of agency and rationality to large organizations that doesn’t reflect their reality. In that way they’re both “not even wrong.”

But then I can see your point to a degree at least.


I want to say again that I'm not making this point by way of a first-principles derivation of what's going on. I know for a fact that the norm in large bounty programs is to incentivize payouts. I don't know that for sure about Apple's program, but it seems extraordinarily unlikely that they depart from this norm, given the care and ceremony with which they rolled this out (much later than other big tech firms).

None of this is to say that the program is managed perfectly, as has been pointed out elsewhere on the thread. I'm not qualified to have a take on that question.


The bounty cost is not relevant for the company, but what about admitting liability?


Not a real concern.


Could you explain this in more detail?


The cost here is Apple changing their processes, which is exceptionally painful for them.


What processes would those be, and do you have actual knowledge of them?


The processes that involve interacting with external parties, which has long been something Apple has been really bad at.


Maybe not “immediate”, but withholding rewards results in fewer researchers participating in bounty programs, which defeats the purpose.


Not if the (true) purpose of having the bounty program is simply PR, rather than an honest desire to find and fix bugs.


The true purpose of these programs is to direct research to specific threats and engineering areas.


Have you ever reported security and privacy issues to Apple? I have. In fact, I have more than one incident open with them right now. One of them could be fixed in one line of code with no adverse consequences. It’s been open for two years. Apple’s Security team is either highly uninterested or highly incompetent. I don’t care which; neither is good.

It’s one of the most infuriating and frustrating experiences I ever had in computing. They clearly don’t want you sharing the issue publicly, but just string you along indefinitely. I’m honestly reaching my limit.

I don’t even care about the bounty money, I just want the bugs fixed. I’d give them all the latitude in the world if I thought the matters were taken seriously, but I don’t believe they are.


Not saying anything about Apple's bug bounty program, but I manage my company's bug bounty program, and for every good submission we get about 10 from India where the reporter XSSes themselves in the web browser console, or similar hard-to-read reports that lead to nothing.

And now we're starting to get a lot of AI-generated submissions. It takes a lot of effort just to sort through the bullshit and accept the good ones, and then to manage it all and fix things within SLA. When something isn't critical it very easily gets pushed down the backlog, competing with all kinds of requests from customers to fix things. The code change might be a one-liner, but testing etc. can blow it up into a very long process.
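
(For anyone unfamiliar with the "XSS themselves in the browser console" pattern mentioned above, a typical non-finding looks something like the hypothetical snippet below: the reporter runs script in their own devtools console, which proves nothing, because no attacker-controlled input ever reaches another user.)

    // Hypothetical illustration of a "self-XSS" non-report: the reporter opens
    // the developer console on the target site and types something like:
    alert(document.domain);        // an alert pops, "proving" script execution
    console.log(document.cookie);  // prints the reporter's own cookies
    // Nothing attacker-controlled reaches any other user here; the only
    // "injected" code is what the reporter typed into their own browser,
    // so there is nothing to pay a bounty for.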


Yes.

See the rest of the thread for a further response on this, esp. w/r/t Apple itself.


There have been many documented cases where tech giants have outright refused to pay out, employing practices like: changing the rules of engagement post-factum, silently banning security researchers from active bounties, escalating good-faith disclosures to law enforcement, extreme pettiness from managers, etc.

> The sums involved are not meaningful to the company

Which makes it all the more bewildering to see how botched the handling is.


Give me an example of a good-faith disclosure escalated to law enforcement? Some examples come to mind, but the ones I'm thinking of won't support your argument.


I'm sorry tptacek, some examples come to mind?

I was really expecting you to say this doesn't happen; I'm now left wondering why security researchers are willing to take such risks.


You are generally not going to be legally liable for things you do in ordinary security research, but you will sure as hell be liable if you do unauthorized serverside research. Apple bounty stories are invariably about clientside work with little to no legal risk.


> the odds are very strong that there's a good reason for that

The easiest way to show this would be to give the responsibility of managing the bug bounty to a third party who isn't involved in the business.


What I haven't had time to learn more about is this: when bounties are such a tiny drop in the bucket relative to such an enormous number of users and so much revenue, how is it not a win-win?


With tech giants there's really no win-win, only one win. They win either way. So why bother?


They don't win when an important-sized customer cares, especially when they're government-sized and can regulate you.


Which is why everyone who has received a large bounty payout from Google or Apple has worked for a government-sized entity.


It is a win-win.


I'm betting Hanlon's razor (= incompetence) helps with divining the reasons of the tech giant in question here.



