That's not true, because for most people there is an economic cost to committing crimes. "Hey, you could make more money selling that on the black market" is not going to convince me to sell something on the black market.
Bounty programs are very much not trying to compete with crime.
The reputation angle shouldn't be dismissed: Google paying so little for this bug is the whole reason this article stays on the front page and gets so much discussion.
I don't know how much it should be worth, but at least there's a PR effect, and it's also a message to the dev community.
I see it the same way the ridiculously low penalties for massive data breaches taught us how much privacy is actually valued.
If Google doesn't have the best reputation of any large tech company for security, it's in the top 3. This is not the nightmare scenario for Google that people think it is. It's a large payout for this bug class, so, if anything, what we're doing here is advertising for them.
I'm in full agreement (genuinely thankful for the context you brought on the difference in market values for this category of bugs), which is also part of why it's sobering that privacy bugs have such a low valuation and that this counts as a high payout.
For security researchers it's apparently obvious, but from the outside it's another nail in the coffin of how we want to think about user data (especially for creators, many of whom are already on the front line of abuse). As you point out, Google here is only the messenger, but we'll still remember the face that delivered the bitter pill, for better and for worse.
It is a factor, though. Most people will commit a non-violent crime for a big enough payoff, especially one where the individuals affected are hard to identify.
If my bug bounty is $10,000 and I can sell the bug for $20,000, then most people will take the legitimate cash. If it's $10,000 and some black-market trader will pay $10,000,000 (obviously exaggerating), then a whole mess of people are going to take the ten million.
Except it's not "legitimate cash" and that's the point.
* Are you talking to someone legitimately interested in purchasing and paying you, or is this a sting?
* If you're meeting up with someone in person, what is the risk that the person won't bring payment, or will try to attack you instead?
* If you're meeting in person, how do you spend $20k in cash without attracting suspicion? How much time will that take?
* If it's digital, are the person paying you and the funds being used to pay you clean, or the subject of an active investigation? What records are there? If this person is busted soon, will you be charged with a crime?
There are a lot of unknowns and a lot of risks, and most people would gladly take a clean $10k they can immediately put in the bank and spend anywhere over the hassle.
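To make that concrete, here's a toy risk-adjusted comparison (every number below is invented, just to illustrate the shape of the decision):

    # Toy risk-adjusted comparison of a clean bounty vs. a black-market sale.
    # Every number here is invented for illustration.
    bounty = 10_000               # clean, immediate, bankable, no legal risk

    black_market_offer = 20_000
    p_paid = 0.7                  # chance the buyer actually pays you
    p_prosecuted = 0.05           # chance the sale leads to prosecution
    cost_if_prosecuted = 500_000  # legal fees, lost income, etc.

    expected_black_market = (p_paid * black_market_offer
                             - p_prosecuted * cost_if_prosecuted)
    print(expected_black_market)  # 14,000 - 25,000 = -11,000

Once you price in the risks, the nominal 2x premium can easily be negative in expectation.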
It's not a crime to sell a bug. You can sell something like this to Crowdfense and receive money wired from the company (or cryptocurrency if you prefer anonymity).
It is not intrinsically a crime to sell a bug, but if you sell a bug and it can be demonstrated you reasonably knew the buyer was going to use it to commit a crime, you will end up with accessory liability to that crime. Selling vulnerabilities is not risk-free.
This is another reason why the distinction between well-worn markets (like Chrome RCEs) and ad-hoc markets is so important; there's a huge amount of plausible deniability built into the existing markets. Most sellers aren't selling to the ultimate users of the vulnerabilities, but to brokers. There aren't brokers for these YouTube vulnerabilities.
Say more. What do you mean by "platform exploit", and which brokers are you talking about? I am immediately skeptical, but it should be easy to knock me down on this.
Most people have an intuitive sense that leads them to ask questions like "If I do this, will someone be harmed? Who? How much harm? What kind of harm?", and that factors into moral decisions.
Almost everyone, even people without a moral sense, has a self-preservation sense: "How likely is it that I will get caught? If I get caught, will I get punished? How bad will the punishment be?" These factor into a personal risk decision. Laws, among their other purposes, are a convenient way to inform people of the risks ahead of time, in hopes of deterring undesirable behavior.
But most people aren't sociopaths, and while they might make fuzzy moral decisions about low-harm, low-risk activities, they will shy away from high-harm or high-risk activities, either out of moral sense or self-preservation or both.
"Stealing from rich companies" is a just a cope. In the case of an exploit against a large company, real innocent people can be harmed, even severely. Exposing whistleblowers or dissidents has even resulted in death.
> Most people have an intuitive sense to ask themselves questions like "If I do this, will someone be harmed
How much time do you spend asking yourself whether your paycheck is coming from a source that causes harm? Or whether the code you have written will be used directly or indirectly to cause harm? Pretty much everyone in tech is responsible for great harm by this logic.
I wish developers (and their companies, tooling, industry, etc.) creating such flaws in the first place would treat the craft with a higher degree of diligence. It bothers me that someone didn't maintain the segregation between display name / global identifier (in the YouTube frontend*) or global identifier / email address (in the older product), or was in a position to maintain the code without understanding the importance of that intended barrier.
If users knew what a mess most software these days looks like under the hood (especially with regard to privacy) I think they'd be a lot less comfortable using it. I'm encouraged by some of the efforts that are making an impact (e.g. advances in memory safety).
(*Seems like it wouldn't have been as big a deal if the architecture at Google relied more heavily on product-encapsulated account identifiers instead of global ones)
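A minimal sketch of what I mean by product-encapsulated identifiers (entirely hypothetical, not a claim about Google's actual internals): derive a stable per-product pseudonym from the global account ID, so an ID leaked from one product's frontend can't be joined against another product's data.

    import hmac, hashlib

    # Hypothetical per-product keys; in practice these would live in a KMS.
    PRODUCT_KEYS = {"youtube": b"yt-secret", "recorder": b"rec-secret"}

    def product_scoped_id(global_account_id: str, product: str) -> str:
        """Derive a stable, per-product pseudonym for a global account ID.
        Leaking it from one product reveals nothing about the others."""
        key = PRODUCT_KEYS[product]
        mac = hmac.new(key, global_account_id.encode(), hashlib.sha256)
        return mac.hexdigest()[:16]

    # The same account gets unlinkable IDs in different products:
    # product_scoped_id("123456789", "youtube") != product_scoped_id("123456789", "recorder")

With something like this, the YouTube frontend would only ever see the YouTube-scoped ID, and the older product's email lookup couldn't be fed an identifier harvested elsewhere.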
> Bounty programs are very much not trying to compete with crime.
Nor did my post posit this.
Bounty programs should pay a substantial fraction of the downside saved by eliminating the bug, because A) this gives an appropriate incentive for effort and motivates the economically correct amount of outside research, and B) this will feel fair and make people more likely to do what you consider the right thing, which is less likely if people feel mistreated.
Should this be true only for vulns, or all bugs? If I as a third party find a bug that is causing Google to undercharge on ads by a fraction, should Google be obligated to pay me a mountain of cash?
Is there any evidence that OP feels that this payout was unfair?
> If I as a third party find a bug that is causing Google to undercharge on ads by a fraction, should Google be obligated to pay me a mountain of cash?
No, but Google should understand that if they give a token payment, people will be less likely to help in future situations like this. And might be inclined to instead quietly tell ad buyers about the loophole.
How do you propose to calculate "the downside saved by eliminating the bug" - ideally in general, but I'd be curious to see if you could do it even for the specific bug discussed in this article.
Prominent youtuber doxxed and killed; terrible press, prolonged by litigation. 1 in 5,000, but very high cost.
Large scale data leak and need for data leak disclosure. 1 in 3, moderate cost.
Bug report saving engineering time by giving a clear report of the issue, instead of having to dig through telemetry, figure out the misuse, and then work out what happened, the extent of past damage, etc. 3 in 4.
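As a back-of-envelope sketch of how you'd combine guesses like those into an expected downside (the probabilities are the ones above; the dollar costs are pure placeholders I made up):

    # Expected downside avoided by fixing the bug.
    # Probabilities are the guesses above; dollar costs are invented placeholders.
    scenarios = [
        # (description, probability, hypothetical cost)
        ("creator doxxed, worst case, plus litigation", 1 / 5000, 50_000_000),
        ("large-scale leak requiring disclosure",       1 / 3,     2_000_000),
        ("engineering time saved by a clear report",    3 / 4,        50_000),
    ]

    expected_downside = sum(p * cost for _, p, cost in scenarios)
    print(f"${expected_downside:,.0f}")  # 10,000 + 666,667 + 37,500 ≈ $714,167

The whole exercise is obviously dominated by the made-up cost figures, which is the hard part of the original question.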
You think that being able to get someone's email address (most likely a business email, but let's pretend it's a personal email) has a 1 in 5,000 chance of being turned into enough personal information to track someone down AND that someone would use it to kill them?
Millions of usernames and emails are leaked every month; if this was the case you'd be seeing these murders in the news every week.
> Millions of usernames and emails are leaked every month; if this was the case you'd be seeing these murders in the news every week.
Yes, because all possible scenarios kill the same fraction of people-- whether we're talking about getting a dump of a million email addresses or giving some nutjob a chance to unmask people he doesn't like online.