In large production environments it's almost impossible to avoid bugs, and some of them are going to be nasty. What sets great, security-conscious companies apart from the rest is how they deal with them.
This is an exemplary response from Google. They responded promptly (with humor, no less) and thanked the guys who found the bug. Then they proceeded to pay out a bounty of $10.000.
I am really glad about how they responded. Whenever Tinfoil has found vulnerabilities in companies like United Airlines[0], for example, those companies mostly respond with anger rather than graciousness.
Exactly. I just saw that the local bank my parents use is still vulnerable to the Heartbleed Bug. But you know what? I don't want to go down there and talk to them because I'm quite certain they'll call the police because I "hacked into their systems".
There should be a security equivalent to hiring a lawyer to write strongly-worded letters for you.
Maybe someone could set up a firm where individuals could hand them a vuln report, and then the firm would contact the vulnerable company on the individual's behalf. The firm would do the long, boring dance of "we suspect you're vulnerable to X, though we haven't tested it, but we'd like to do a free vulnerability test on you, so please sign this liability waiver", both protecting the individual from liability, and taking time the individual doesn't have. In return, if the company gives rewards, the firm could take a percentage.
So you pay money to hire somebody to send a company a letter informing it of its own problem, in the hope that maybe, just maybe, the company will reward the firm a small sum of money and you will get a small amount back.
I might be living in a country with very few banks (3). I may benefit from letting them know about a security issue, especially if, because of that issue, I could potentially go to jail.
I may not have the option of changing bank because the others are even worse.
However, I don't know how much I would pay for that. Probably some kind of class action would work.
That's beside the point. It still costs money, and the company that's vulnerable is not the one paying it. A service like this would be time consuming (bogus reports, etc.), and the EFF would still have to use money from donations to finance this.
The only thing I can think of is some security firm doing this, using the exposure as a marketing tool to establish themselves as an authority on the subject.
> I just saw that the local bank my parents use is still vulnerable to the Heartbleed Bug.
Just remember, many sites reuse the old certificate's expiration date even though they generated new certificates, which shows up as a false positive on the checking tools.
If you are a bank, and you haven't fixed one of the worst and widest-reaching security holes in years by now... well. Criminal negligence would be an appropriate description.
While I know plenty of companies do not respond how I feel they should to vulnerabilities, reading that story I don't see any cited anger from United Airlines.
You're right; the anger was mostly behind the scenes. It turns out it's also /incredibly/ hard to disclose a vulnerability to most companies. Companies like Google or that have bug bounty / disclosure programs are to be lauded. :)
Hmm, a pretty cheap road trip for just ten dollars, and I'm also not sure why they thought it necessary to include an extra significant figure for cents.
A few webcrawlers[1] out there follow HTTP redirect headers and ignore the change in scheme (this method is different from OP's but achieves the same goal).
So anyone can create a trap link such as
<a href="file:///etc/passwd">gold</a>
Or
<a href="trap.html">trap</a>
Once trap.html is requested, the server issues a "Location: file:///etc/passwd" header.
Then it's just a matter of sitting and waiting for the result to show up wherever that spider publishes its indexed results.
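A minimal sketch of such a trap server, using only Python's standard library. The port, paths, and payload are illustrative, and a crawler is only affected if it follows Location headers without re-checking the URL scheme:

```python
# Sketch of the trap described above: an ordinary-looking page whose link
# target answers with a redirect to a file:// URL. A naive crawler that
# follows Location headers without re-checking the scheme will try to
# index the local file. Port and file path here are arbitrary examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/trap.html":
            # The bait: redirect out of the http scheme entirely.
            self.send_response(302)
            self.send_header("Location", "file:///etc/passwd")
            self.end_headers()
        else:
            # The lure page containing the trap link.
            body = b'<a href="trap.html">trap</a>'
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To actually run it:
# HTTPServer(("127.0.0.1", 8000), TrapHandler).serve_forever()
```

A well-behaved fetcher (e.g. Python's own urllib) refuses redirects to non-http(s) schemes, which is exactly the check the vulnerable crawlers skip.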
... And this is why you want to discontinue products and services your engineers can't be motivated to maintain. Amazing.
This should scare anyone who has ever left an old side project running; I could see a lot of companies doing a product/service portfolio review based on this as a case study.
Protect your clients by fixing the product even if the engineers don't care much anymore, not by suddenly discontinuing something your clients came to depend upon, without even giving them alternatives.
I'm quite happy with App Engine for unmaintained side projects. Very few upgrades are needed and your crufty code is quite well encapsulated. For something like the heartbleed bug there's nothing to do.
They have also discovered vulnerabilities in many big websites (Dropbox, Facebook, Mega, ...). Their blog also has many great write-ups: http://blog.detectify.com/
Sorry to hear that you are disappointed. Few results can be a good thing though, it might just mean that your site has very few issues! Feel free to mail us again and thanks for the feedback :)
Er...CTO of Tinfoil here. We respect the Detectify guys a lot. Not sure what you were trying to get at, but there's no conspiracy here. We don't engage in subversive competitive tactics.
This is another reason not to use XML, plain and simple.
It's too much hidden power in the hands of those who don't know what they're doing (loading external entities referenced in an XML document automatically? What kind of joke is that?)
YAML and XML seem too powerful and too complex for their own common use cases (data storage). Markdown too - how many Markdown parsers allow for strict parsing against an HTML whitelist, and don't allow native HTML at all by default?
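For illustration, here is a toy version of the "strict whitelist" idea: a sketch built on Python's stdlib HTMLParser, not a production sanitizer (it drops all attributes wholesale and knows nothing about URLs or CSS):

```python
# Toy whitelist filter: re-emit only tags on an allow-list, drop all
# attributes (the usual XSS vector), and escape everything else as text.
# This is a sketch of the idea, not a vetted sanitizer.
from html.parser import HTMLParser
from html import escape

ALLOWED = {"p", "em", "strong", "code", "pre", "ul", "ol", "li"}

class Whitelist(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Attributes are discarded entirely, even on allowed tags.
        self.out.append(f"<{tag}>" if tag in ALLOWED else "")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>" if tag in ALLOWED else "")

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html):
    w = Whitelist()
    w.feed(html)
    w.close()
    return "".join(w.out)

print(sanitize('<p>ok</p><script>alert(1)</script>'))
```

A Markdown renderer with native HTML disabled could feed its output through something like this as a second line of defense.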
> loading external entities referenced in an XML document automatically? What kind of joke is that?
Your browser does much the same when parsing (X)HTML. LaTeX naturally includes ‘external’ resources when building an output file. There are tons of examples like that, loading external entities per se is not wrong, it’s mostly just wrong under these specific circumstances.
I think the important difference here is that with browsers, the behavior is well-known and well-understood, there are a very small number of them, and you're unlikely to run one in a production environment -- barring, say, something like PhantomJS, which still has all the foregoing in its favor.
This compared to XML parsers, for which there are often multiple per language, each of which may be implemented to wildly different levels of sophistication re: security.
My point was that it is not an unreasonable thing to have some sort of #include directive in a data format, and certainly not in a markup language.
The problem here was the same as in the rest of the software industry: programmers are far from ‘engineers’ in their desire to understand their tools, use the right tools and build bug-free code. Instead, most people hack for fun with tools they hardly understand and then somehow manage to complain if they shoot off their feet while doing so.
Hacking for fun and shooting off extremities is of course perfectly fine, but the blame for the latter lies in the programmer (and possibly their education), not the tools.
XML made it far more manageable to create machine-to-machine APIs. I can say we surely would not want to go back to the '80s and '90s, when doing that stuff was a nightmare.
Yes, it was a drunken, stumbling step forward. Let's take another one, and move to something simpler, which solves the problem better.
To quote Phil Wadler's paper about XML, where he established some of the principles that influenced XQuery: "So the essence of XML is this: the problem it solves is not hard, and it does not solve the problem well."[1]
I suggest reading the entire paper; it shows a number of shortcomings, but it's also rather enlightening about how XML actually is structured and how its semantics are defined. (I.e., in spite of that quote, it's not just XML bashing.)
Hm. In his introduction, he says, "XML is touted as an external format for representing data." To me that mostly misses the value of XML. I think of it as an interchange format, not a closely-mirror-my-datastructures format. I've used it before when I want a long-lived data format that is mostly annotated text, and I'd happily do it again.
That said, I'm very skeptical of the XML-for-everything school, and nearly murdered a group of engineers who were using XML to transfer data from one spot in an app to another, even though it all ran in the same JVM. So maybe I'm more defending a small subset of XML rather than the XML-industrial complex.
That's an overly narrow view. We shouldn't avoid powerful features merely because power can cause problems.
Where would we be if web browsers couldn't use external resources?
General-purpose parsers/renderers need to have tightly locked-down, sensible defaults, or even security-oriented feature subsets, but that doesn't mean we should remove one of their most useful features altogether, or avoid them because they're powerful and dangerous.
I tried them on a client project today, and they found some (minor) form-post issues. The scan took roughly 1.5 hours, which results in an "incurred cost" of $4.75 (for the cloud resources needed), and they suggest a 5x "gratitude" factor, which I gladly paid.
So they tell you how much they spent on cloud resources and even suggest a gratitude factor? That is actually a great way of getting paid (enough) even if you do not enforce any fixed pricing, cool! :)
Just wanted to point out the hilarity of your typo: "we're not very nice when..." vs "were not very nice when..." completely reversed what you meant to say ;P.
Interesting to see this hit big companies like Google. The problem, I think, stems from the idea that most people treat XML parsers as a "black box" and don't enquire too closely into all the functionality they support.
Reading the spec that led to the implementations can often reveal interesting things, like support for external entities.
I would say the flaw is that XML parsers will try to resolve external entities on their own, by resolving file paths or whatever. They shouldn't do this by default: they should instead take a programmer-supplied entity resolver and call into that.
They could also provide a canned resolver which hits the local filesystem and/or the web, which programmers could supply if they wanted, but this should not be a default. The programmer should have to explicitly specify that access.
I've had related problems where XML parsers would try to go off and fetch DTDs from the web, then fail, because they were running on firewalled machines that couldn't see the servers hosting the DTDs. That took us by surprise. We installed an entity resolver that looked in a local cache of DTDs instead, which was fairly easy. But I would prefer not to have been surprised.
Also, all this stuff should be running in a jail where it can't even see any interesting files, of course.
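The "off unless the programmer opts in" behavior described above can be sketched with Python's stdlib SAX parser. The feature flags below are real stdlib API (and recent Python versions already default external general entities to off); the XXE payload is a made-up example:

```python
# Sketch: refuse to resolve external entities unless explicitly enabled.
# With the features off, the parser skips the entity instead of reading
# the referenced file; the payload below is a toy XXE example.
import io
import xml.sax
from xml.sax.handler import feature_external_ges, feature_external_pes

class TextCollector(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)

EVIL = """<?xml version="1.0"?>
<!DOCTYPE root [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<root>&xxe;</root>"""

parser = xml.sax.make_parser()
parser.setFeature(feature_external_ges, False)  # no external general entities
parser.setFeature(feature_external_pes, False)  # no external parameter entities
collector = TextCollector()
parser.setContentHandler(collector)
parser.parse(io.StringIO(EVIL))

# The entity was skipped, not fetched: no file contents leak into the text.
print(repr("".join(collector.chunks)))
```

Flipping `feature_external_ges` to True is exactly the explicit opt-in being argued for here; a programmer-supplied `EntityResolver` can then point lookups at a local cache.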
> They shouldn't do this by default: they should instead take a programmer-supplied entity resolver and call into that.
Then the programmers would most probably write their own resolvers with even more bugs. You would have 10,000 broken implementations of that code, half of them copied from a Stack Overflow example with security left as an exercise for the reader.
Also, horrible defaults in XML parsers. That any XML parser allows retrieval of DTDs without explicit options specifying allowed sources, etc., is beyond me. It's not just local file access, which becomes a security hole when you let users pass you XML files, though that is one of the worst ones.
But the number of times I've seen production apps that turn out to request DTDs or schemas from remote servers behind the scenes has made that one of the first things I check if I am tasked to maintain or look into anything that parses XML. Often these apps stop working or slow down for seemingly no reason because the DTD or schema becomes unavailable, and nobody understands why.
The crazy part about this is that I remember having these conversations over a decade ago and it was very clearly recognized as a major security, reliability and performance problem but the greater XML community basically just shrugged it off.
One really interesting aspect of this is that many applications suddenly broke when the Republicans shut down the government last year because a number of XML schemas are managed by government agencies who were suddenly legally unable to provide their normal web services:
Makes me wonder whether it's time to start contributing patches to disable bad ideas like this by default — some places are clearly paying a significant amount to serve content nobody should need: http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dt...
It's bad practice to fetch an external DTD on a server you don't control, first for security reasons, second because your application then depends on something that can go away anytime, third because it's rude to the third party.
twic is right that one should always use entity resolvers that point to local resources and that parsers should run in a sandbox without external access.
He's also right to say that by default parsers shouldn't go fetch external resources; I think the reason is historical: entity resolvers appeared later than the parsers themselves.
I agree, this can be summarised as "abstraction hides bugs". I believe that although abstraction is a powerful tool, there is such a thing as too much of it, and when reading an XML document can cause access to other files, maybe even across the network, perhaps things have gone a little too far. This isn't like an obvious #include or @import, it's much more subtle.
When I first noticed that HTML doctypes have URLs in them, I inquisitively tried accessing them, and it brought up a lot of questions in my mind about why it was designed that way, what would happen if the URLs no longer existed, etc. Such an explicit external dependency just didn't feel right to me. Unfortunately most people either don't notice or seem to ignore these things...
XML legitimately scares me. The number of scary, twisted things it can do makes me shudder every time I write code to parse some XML from anywhere - it just feels like a giant timebomb waiting to happen.
Ironically, using an existing parser is what opens you to this vulnerability in the first place. If you hack your own together based on a vague idea of what XML really is, you're very unlikely to "correctly" handle entities, you'll probably just put in enough to handle simple XHTML entities, and that makes you immune to this problem! It's the compliant parsers that are vulnerable to this....
Or, if you use existing parsers in a language like Haskell, you know parsing is supposed to be a pure function. If parsing suddenly requires IO effects, you can be suspicious and try to figure out what is going on.
We're not talking about a malicious XML library here, though. We're talking about a misunderstanding regarding what happens during legitimate parsing of XML.
A) Legitimate libraries don't (unless the IO action is in fact pure)
B) Rogue libraries that do this will not generally work: laziness, optimizations, RTS races can all make the IO action run 0..N times, arbitrarily.
C) It doesn't change the fact that in Haskell, the XML library exposes the weird XML behavior of looking up external entities by being in IO (my original point) -- because of A.
I wrote a libxml2 binding in Haskell (http://hackage.haskell.org/package/libxml-sax). It was an absolute nightmare, in part because handling entities safely requires a lot of hoop-jumping (and I'm not even 100% sure I caught all the places libxml2 does unsafe stuff).
Okay, parent comment obviously came out wrong and is starting its descent into white hell... ;-) I'm not going to delete it since it would be unfair to the child comments.
XML is for some reason a super-controversial technology that is apparently almost universally hated, and XSLT even more so. I hope I'll not be downvoted even more by asking what's scary about being downstream from a (serious, well-maintained) XML parser?
What's "scary" (not the term I would personally use) is that the libraries typically aren't safe by default against malicious use. Users of the library have to know a lot in order to make them safe. See https://bitbucket.org/tiran/defusedxml for some of the potentially nasty gotchas in XML and XML-related technologies. Quoting from it:
> None of the issues is new. They have been known for a long time. Billion laughs was first reported in 2003. Nevertheless some XML libraries and applications are still vulnerable and even heavy users of XML are surprised by these features. It's hard to say whom to blame for the situation. It's too short sighted to shift all blame on XML parsers and XML libraries for using insecure default settings. After all they properly implement XML specifications. Application developers must not rely that a library is always configured for security and potential harmful data by default.
I think cheald probably means writing code to invoke a parser to parse XML. Presumably if you had written your own parser (generally, not a great idea) the resulting behaviour would not be "scary, twisted"... [at least to the person writing the parser].
Is there a startup that can help automate custom attacks on websites? Like guide the webmaster to look for holes in their setup. I'm guessing some security expert can do a good job educating new businesses on how to prepare for the big bad world.
I think you just proved that writing an excellent blog post like you did is an amazing way to get new customers!! Maybe make it a tad more explicit in the post (or page) what Detectify does. I personally had no idea... but I checked the homepage because I liked the design and was curious, and only then did I realize what you guys were doing.
These payouts are for product vulnerabilities; things that Microsoft and Google ship to customers; vulnerabilities that those vendors are effectively creating on hundreds of thousands of machines they don't own.
I'm surprised nobody has mentioned containers, e.g. Docker, as a way of limiting the damage from this kind of bug. In a container whose only purpose is to run the application, /etc/passwd should be as uninteresting as:
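For illustration, a hypothetical /etc/passwd from a minimal single-purpose container image, where the file tells an attacker almost nothing:

```shell
# Hypothetical /etc/passwd contents for a minimal container: root plus a
# single locked-down service account, neither with a login shell. The
# account names and paths are made up for illustration.
cat <<'EOF' > /tmp/container_passwd_example
root:x:0:0:root:/root:/sbin/nologin
app:x:1000:1000:app service:/app:/sbin/nologin
EOF
cat /tmp/container_passwd_example
```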
I think they couldn't read /etc/shadow, so it's not that bad at first. But then they could surely access some configuration file of the application itself, probably containing DB creds, and of course more information that helps find more vulns.
It's shocking to me that baking "db creds" into a binary or configuration file is still so common that anyone would expect it to be true on a randomly selected server. Is this still the industry standard?
Exactly the same as #includes and #defines in C - they let you organize your code in multiple files, be more concise, and shoot yourself in the foot, repeatedly.
They were useful for document editing use cases - remember this was before SOAP and xml serialization, and sgml tooling that already supported this stuff existed. You can see the record of the decision here:
http://www.w3.org/XML/9712-reports.html#ID5
The source is not generally accessible from prod servers - only binaries and supporting data, and only the ones running on that computer.
I guess it's possible you could find a computer that hosted both search and the codebase. But, since search is for external and the codebase is for internal, I'd bet that they don't share clusters.
What if that file is per container and every piece of software runs isolated? It's still a potential issue because you could retrieve other sensitive information (log files?).
Yes, golly, you must be right. I bet Google just uses rsync to copy the passwords of billions of user accounts to thousands upon thousands of machines all over the world, in plain text! It's probably in /var/lib/every_user_password_in_plain_text.txt!
This isn't a root exploit. It serves up files that are readable by the serving process, such as /etc/passwd. You are aware, I hope because it's been this way for 20+ years, that despite the name there are no passwords in /etc/passwd, right? It's not considered a sensitive file.
% ls -l /etc/passwd
-rw-r--r-- 1 root root 2028 Dec 2 13:05 /etc/passwd
I'm kind of sad that this is a throwaway account because you're posting good responses, that are technically competent and are actually specific to the bug discussed in the article, to people who are either less informed or are talking about their vague general understanding of vulnerabilities rather than reading the article and actually discussing its contents.
Your posts are exactly the kind of thing I _want_ to read on HN. Is there a particular reason why you feel you can't post this under a general-use account?
Uhh, you do realize his 'throwaway' account is two years old with hundreds of comments? I don't know if he's partitioning, hoarding, or being playful in account naming, but that's probably a better track record than most non-throwaway accounts on this site.
To clarify what thrownaway2424 said, in case some people really are unfamiliar;
You can't take the password out of RAM. It would be pretty insane to store it in RAM once the login process is done.
This exploit can't read RAM. Being able to read RAM from a program other than the one you are exploiting is pretty unusual today (early operating systems were much less scrupulous, however). There are lots of scary local exploits that can do this by abusing the high level of privileges granted to drivers for things like HDMI devices, but I've never heard of a remote exploit that could read arbitrary RAM. You can sometimes convince a program to dump core if you have a DoS and can run ulimit.
We used to store passwords in /etc/passwd. The user database needs to be public, so the passwords stored in it were hashed, and thus thought to be secure. Along came the Morris worm, which used (among other things) cracking of the DES-based crypt() password hashes to infect systems. I imagine there were less high-profile incidents as well, but the long and the short of it is we now use /etc/shadow for secrets and /etc/passwd for usernames.
While not secret, I'd certainly call /etc/shadow sensitive, but it's a small point.
It's probably cryptographically hashed. There is no reason to keep a raw password in RAM beyond the stack frame of the function that receives it from the client - at any point after that, just store & compare the hash.
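That "store & compare the hash" flow can be sketched with the standard library; the KDF parameters here are illustrative, not a tuned recommendation:

```python
# Sketch of the hash-then-compare flow: the plaintext is only needed long
# enough to derive a slow, salted hash; everything after that compares
# hashes. Iteration count and salt size are illustrative values only.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Derive a salted PBKDF2 hash; the raw password can be dropped right after.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, stored):
    # Re-derive and compare in constant time; the raw password never persists.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("hunter2")
print(verify("hunter2", salt, stored))   # True
print(verify("wrong guess", salt, stored))  # False
```

The parent's caveat still applies, of course: if the hashes themselves leak, weak passwords fall to offline cracking regardless of the comparison code.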
It would still be catastrophic if they had access to the hashed passwords of a big number of users. People use weak passwords and they get cracked in no time if you have just a hash.
But as I said before, that also depends on some details about the setup that we don't know from this article alone.
SELinux. This kind of stuff would be where it really shines. A correctly configured installation would block and report access to files the application is not supposed to access. Maintaining it, especially for individual applications, is work, but it seems to me that on the scale of Google it may well be worthwhile.
Well done. I had to deal with some similar issues with my own project, and they weren't legacy code either. This should push me to go through some of my code again.
This sells for at least 10 times more on the black market. Why would one rationally choose to "sell" this to Google instead of the black market?
Some people don't break the law because they are afraid to get caught, but I like to believe that most people don't break the law because of the moral aspect. To me at least, selling this on the black market poses no moral questions, so, leaving aside "I'm afraid to get caught", why would one not sell this on the black market? Simple economic analysis.
That vulnerability does not sell for 10x on "the black market".
* It fits into nobody's existing operational framework (no crime syndicate has a UI with a button labeled "read files off Google's prod servers")
* A single patch run by a single organization kills it entirely
* The odds of anyone, having extended access and pivoted into Google's data center, keeping that access are zero.
I'm not an authority on how much the black market values dumb web vulnerabilities but my guess on a black market price tag for this bug is "significantly less than Google paid".
Later: I asked a friend. "An XXE in a single property? Worthless. And at Google? Worth money to Google. Worth nothing to anybody else."
Exactly. Unless this could somehow be pivoted into write access, with the ability to modify server responses to clients (for phishing or installing malware), no black hat would care about this.
No, they can't. Read the inverse of my bulleted list to see what makes money:
* Bugs that fit readily into operational frameworks (ie: it would be reasonable to have a UI with a button invoking that bug and/or any of the 15 other bugs like it)
* Bugs that can't be killed with a single patch cycle by a single entity
* Bugs that provide long-term access, or access that is unlikely to get your entire syndicate caught
Example of a potentially lucrative web bug: bug in Wordpress.
Example of a bug unlikely to be lucrative: "read any Facebook server file".
I know that sounds crazy and backwards, but I don't think it is.
If you want to look at it rationally, you have to factor in the risks you are taking by selling it on the black market. These risks include:
- How will you launder the money? Alternatively, how will you spend it on the black market? You can't buy houses, cars, or stocks with black money.
- Will you get paid? Secure, guaranteed anonymous payments are not trivial. I don't know if there are escrow services for the black market, but this is definitely risky. We are talking about shady actors, after all.
- Will you get caught? If you do, you will probably end up in prison.
When you take the above into consideration, I think most people would prefer $10,000 legitimate US dollars without risk to $100,000 that might end up giving you ten years behind bars.
How would you escrow it so that you can be sure to actually get the funds? Sure they're not going to pay up front and it would be over-trusting to give a crack away on the promise of later funds, so ...
Your word is incredibly important for criminal enterprises. If you fuck someone over and somebody finds out, nobody will ever do business with you again (besides the whole 'getting shot' thing). Escrow services (by way of a middle-man you both trust) are only necessary for really big jobs. In general you pay first and get your goods once payment is confirmed.
I can see that working in meatspace but here we're talking about selling an idea on the web - the buyer is very unlikely to be able to track you so they're unlikely to front the money.
Suppose you found a bug, couldn't cash it in with Google because of where you live and so were selling it on. The buyer won't release the funds, would you really give up the goods? Even with an escrow, proving the transfer and performing the transaction with minimum risk seems problematic to me.
> the buyer is very unlikely to be able to track you
The buyer will probably be able to track you easily. If they are paying $100k for hacks on the black market, they have the resources to find you.
Yet they're getting the cracks from you .. which suggests you're good enough to be able to hide yourself away. Use anonymising proxies to connect to a machine that you Tor off to a BTC wallet that only takes in washed coins, or something. Even being able to spend 100k on [potential?] server cracks doesn't seem enough resources to be able to take down Tor?
If they try and trace you just send a spike!!1111one
No, indeed I wasn't pitching that as a problem with BTC, just asking in general how you can ensure a secret transaction will go through. You'd need a trusted escrow; a trusted escrow would probably need to have a business address (and other things) for you to trust them... but that means they'd most likely be registered to handle money, and that means records of your transaction that law enforcement could eventually get hold of.
Anyone who sells on the black market already knows the answers to these. Malware, botnet and black market security researchers also know all the answers to these. Let's just say that in general, it is actually trivial to launder money from black market transactions, as long as you don't get the attention of the feds and you stick to non-US markets.
They made $10k plus a huge amount of free advertising for their company and services (security). I reckon this release alone will earn them far more than your estimated $90k difference.
Mind you, your point is certainly valid if this were a random hacker type.
If you donate to charity, Google will match your donation. You can buy a smile on your face for the rest of your life, knowing your exploit built a school in Africa.
If you manage to sell this on the black market, that money is worth half when turned into "legit" money that you can spend. If we leave aside "I'm afraid to get caught", do we mean "caught by the justice system"? What would happen if you sold your exploit to some cybermob and, a few days later, some monkey on a typewriter found your exact exploit and published it online? Not your problem that it is now worthless and some mob feels you sold them crappy gear?
As for the moral aspect. Think of anyone you hold in high regard, or have a loving relationship with. Selling an exploit that will be used for harm, might mean harm to those you hold dear.
Then there is this simmering thing in your subconscious. Some know how to put out that fire. Others wake up in a sweat years later, after a dream where their exploit was used to find and execute a political dissident. That is: you may very well come to regret a "bad" deed in the future, when your situation and responsibilities change. You won't lie on your deathbed and think: "I wish I hadn't built that school, but taken the money and put a down payment on my new bathroom."
You have a very strange sense of morality, IMO. I refrain from inflicting damage on others for personal gain; it's really that simple to me. Other questions are complicated and conflicting, but this one is quite clear-cut to me.
I agree with you; however, companies are completely devoid of morality. Their only purpose is profit: they will hire shady lawyers to interpret the law in their favor, fire people without giving it a second thought, or collude with other big companies to keep their employees' wages low. So why would I treat them differently?
In business, morality is a luxury that some companies can't afford and most choose not to have, so it shouldn't be expected.
The only thing preventing you from selling it on the black market is the potential fame and business you may get by being able to reveal your find, which may or may not be worth it.
That $10k is not really much of an incentive from a business perspective.
> companies are completely devoid of morality; their only purpose is profit
Companies are groups of people and have many different purposes. I understand being worried about the rise in corporate oligarchy, but your argument is itself the attitude you are accusing companies of. The problem isn't companies being immoral, but people rationalising behaviour that they know to be immoral.
That's probably because I treat them the same way they treat me.
Their attitude makes sense, and sometimes it's actually necessary for a company's/entity's survival.
We all face hard choices between what's moral and what's best for our own survival; the only difference is that companies put any amount of small profit over morality, not just survival.
> I agree with you; however, companies are completely devoid of morality. Their only purpose is profit: they will hire shady lawyers to interpret the law in their favor, fire people without giving it a second thought, or collude with other big companies to keep their employees' wages low. So why would I treat them differently?
Perhaps, but even so, when you sell a vulnerability on the "black market" you don't just harm Google. You also harm the people the vulnerability will be used against (to phish their credit card details, compromise their servers, etc.).
(Perhaps in this case, for technical reasons, you can only harm Google with this thing; I'm not sure. But still, speaking in general.)
I don't agree with you that "selling this on the black market poses no moral questions"; this gives access to Google's production servers, which can really harm Google in very bad ways. Unless Google has done specific very bad things to you and you want retribution, why would you do that to them?
But I agree with you that $10,000 doesn't sound like much, for such an exploit, and for a company like Google.
1. Because you'll be dealing with organized criminals, which is dangerous and brings problems beyond the mere possibility of getting caught.
2. I'm assuming your basis for "no moral questions" is that you'd be hurting Google, which is a corporation, not a human, and can therefore be treated with a different set of moral values. (If this assumption is incorrect, you need to clarify.) However, selling this exploit on the black market may very well be leveraged to affect a lot more people than just Google: people who will be phished, scammed, and extorted. That (I hope) does pose moral questions, doesn't it?
The problem is, you can't sell an exploit on the black market on the condition that it may only be used to (say) "steal from the rich and incorporated".
3. Finally, $100k earned on the black market is not worth the same as if it were legitimate, because it is very hard to spend. I can imagine that laundering it could easily knock 50% off the value, as well as taking a lot of time and effort. Then you've got $50k, which is already a lot closer to $10k.
> To me at least, selling this on the black market poses no moral questions
That's probably a reflection of your own morals. There are millions of people that could be affected by this bug, so I'm not sure how there isn't a moral question here.
I fear that your economic analysis is way too simple.
You should include damage to the company's reputation, should this get leaked. Especially since they work in security: who would trust their security to people who sell vulnerabilities to the highest bidder?
Maybe this weird and obsolete service was run on a small subset of servers that is not really worth that much. I would assume your journey would end right there at that one machine (or n of the same).
You are a scumbag, but the math is right. You would need to discover 10 of these a year to make a living wage in SF - maybe 50 if you are a team of 5. They should pay what they pay their engineers.
Maybe I just read it wrong, but it sounds like Google made an opening offer and the security group felt it was sufficient and decided to take it instead of negotiating. Maybe I'm wrong and they'd already given the details, and Google was just trying to keep them happy and provide some cash for what would otherwise have been a Good Samaritan, open-source-contributor type of report.
As long as Google is willing to negotiate, I don't see a problem with a group being satisfied with 10k and taking it.
Bounties are always awarded after the bug is disclosed[1].
We constantly[2] upgrade the bounties whenever we feel like we should be paying more, and we will continue to do so. We also increase the rewards from the amounts in the price list if we think they result in a higher impact than what the reporter originally suspected.
We aren't actually trying to out-pay the black market. Overall, our goal is to reward the security community for the time and effort they put into their security research, since we share the same goal of keeping all of us safe (whether that's Google services, or open source/popular software[3]).
Well done, Google.