How we got read access on Google’s production servers (detectify.com)
1156 points by detectify on April 11, 2014 | 192 comments



In large production environments it's almost impossible to avoid bugs - and some of them are going to be nasty. What sets great, security-conscious companies apart from the rest is how they deal with them.

This is an exemplary response from Google. They responded promptly (with humor, no less) and thanked the guys who found the bug. Then they proceeded to pay out a bounty of $10.000.

Well done, Google.


Indeed, it's funny that I'm reading about a vulnerability they had, and it's actually making me feel safer about using their products.


I am really glad about how they responded. Whenever Tinfoil has found vulnerabilities in companies like United Airlines[0], for example, those companies mostly respond with anger rather than graciousness.

[0] https://www.tinfoilsecurity.com/blog/132969897


Exactly. I just saw that the local bank my parents use is still vulnerable to the Heartbleed Bug. But you know what? I don't want to go down there and talk to them because I'm quite certain they'll call the police because I "hacked into their systems".


There should be a security equivalent to hiring a lawyer to write strongly-worded letters for you.

Maybe someone could set up a firm where individuals could hand them a vuln report, and then the firm would contact the vulnerable company on the individual's behalf. The firm would do the long, boring dance of "we suspect you're vulnerable to X, though we haven't tested it, but we'd like to do a free vulnerability test on you, so please sign this liability waiver", both protecting the individual from liability, and taking time the individual doesn't have. In return, if the company gives rewards, the firm could take a percentage.


So you pay money to hire somebody to send a company a letter informing the company of the company's problem, in hopes that maybe, just maybe, the company will reward the firm a small sum of money and you will get a small amount back.

I think you have a winner on your hands.


I might be living in a country with very few banks (3). I may benefit from letting them know about a security issue, especially if, because of that issue, I could potentially go to jail.

I may not have the option of changing bank because the others are even worse.

However, I don't know how much I would pay for that. Probably some kind of class action would work.


They wouldn't be doing it for the money. The EFF would be a good example of a firm that could take this practice up.


That's beside the point. It still costs money, and the company that's vulnerable is not the one paying it. A service like this would be time-consuming (bogus reports, etc.), and the EFF would still have to use money from donations to finance this.

The only thing I can think of is some security firm doing this, using the exposure as a marketing tool to establish themselves as an authority on the subject.


> I just saw that the local bank my parents use is still vulnerable to the Heartbleed Bug.

Just remember, many sites kept the old certificate expiration date even though they generated new certificates, which shows up as a false positive in the checking tools.


One idea: Call your local newspaper with an anonymous tip?


To be fair, there are some that respond more graciously than others, but it's entirely unclear which ones will.


If you are a bank, and you haven't fixed one of the worst and widest-reaching security holes in years by now... well. Criminal negligence would be an appropriate description.


That's what pastebin is for.


While I know plenty of companies do not respond how I feel they should to vulnerabilities, reading that story I don't see any cited anger from United Airlines.

Am I missing part of the story?


You're right; the anger was mostly behind the scenes. It turns out it's also /incredibly/ hard to disclose a vulnerability to most companies. Companies like Google, or those that have bug bounty / disclosure programs, are to be lauded. :)


Call me an idealist, but I think 10,000 could be low.


Where _is_ Google's response?


If you read the article, it summarizes it.


Oh -- that video was part of Google's response? I thought it was part of some meme to describe Google's response.


That's how I read it, yes.


Memes are a common way of communicating at Google, even formally.


That's fascinating; do you happen to remember the reference?


Hmm, a pretty cheap road trip for just ten dollars, and I'm also not sure why they thought it necessary to include an extra significant figure for the cents.


Some countries reverse the role of period and comma in numbers. The author meant ten thousand.


I'll admit, it threw me off at first too.


The comma is actually used as the decimal separator in more of the world than the period: http://en.wikipedia.org/wiki/File:DecimalSeparator.svg


A few webcrawlers[1] out there follow HTTP redirect headers and ignore the change in schemes (this method is different from the OP's but achieves the same goal).

So anyone can create a trap link such as

    <a href="file:///etc/passwd">gold</a>
Or

   <a href="trap.html">trap</a> 
Once trap.html is requested, the server issues the header "Location: file:///etc/passwd".

Then it's just a matter of sitting and waiting for the result to show up wherever that spider shows its indexed results.

[1] https://github.com/scrapy/scrapy/issues/457
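
A minimal sketch of such a trap server in Python (the port and paths are made up; it assumes a crawler that follows redirects without re-checking the URL scheme):

    # Serve /trap.html as a redirect to a local file. A naive crawler
    # that follows the Location header without re-validating the
    # scheme may "fetch" /etc/passwd and index its contents.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/trap.html":
                self.send_response(302)
                self.send_header("Location", "file:///etc/passwd")
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

    HTTPServer(("", 8000), TrapHandler).serve_forever()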


... And this is why you want to discontinue products and services your engineers can't be motivated to maintain. Amazing.

This should scare anyone who has ever left an old side project running; I could see a lot of companies doing a product/service portfolio review based on this as a case study.


But I heard that discontinuing unstaffed projects was evil?


I recently deleted a decade old CGI script called "db" that would execute arbitrary database queries for you. No one remembered it was there.


Or just move it to some cheap VPS where it cannot damage other services or your infrastructure.


Or your reputation or your ethical and possibly legal duty to protect your clients?


Compartmentalization is part of that.


Protect your clients by fixing the product even if the engineers don't care much anymore, not by suddenly discontinuing something your clients came to depend upon, without even giving them alternatives.


Most of the time projects are not neatly encapsulated like that.


Shouldn't they be there in the first place?

Even better, host on your competitor's servers.


Reputation is a form of infrastructure.


I'm quite happy with App Engine for unmaintained side projects. Very few upgrades are needed, and your crufty code is quite well encapsulated. For something like the Heartbleed bug there's nothing to do.


Or have a policy of replying with humor and bounties, so the rest of the world happily finds your vulnerabilities for you.

Works for small, old, etc. products, as the value of a breach will probably be less than the value of the bounty + cred.



I hope it doesn't go unnoticed that the guys who discovered this vulnerability created a really great product, Detectify:

https://detectify.com/

They also discovered vulnerabilities in many big websites (Dropbox, Facebook, Mega, ...). Their blog also has many great write-ups: http://blog.detectify.com/


While they are probably good at doing this manually, their automated tool finds very little. And they were kind of assholes on support :(


Sorry to hear that you are disappointed. Few results can be a good thing though; it might just mean that your site has very few issues! Feel free to mail us again, and thanks for the feedback :)


Odd, I had the opposite experience. They seemed fine to me.


Nice try, Tinfoil.


Er...CTO of Tinfoil here. We respect the Detectify guys a lot. Not sure what you were trying to get at, but there's no conspiracy here. We don't engage in subversive competitive tactics.


This is another reason not to use XML, plain and simple.

It's too much hidden power in the hands of those who don't know what they're doing (loading external entities referenced in an XML document automatically? What kind of joke is that?)
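
(For readers unfamiliar with the bug class, XXE, here is a minimal sketch in Python using lxml; whether the entity actually gets expanded depends on the parser's settings, which is exactly the problem:)

    # Sketch of an XML external entity (XXE) payload. A parser that
    # resolves external entities would substitute &xxe; with the
    # contents of /etc/passwd; the hardened parser below refuses to.
    from lxml import etree

    payload = b"""<?xml version="1.0"?>
    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
    <foo>&xxe;</foo>"""

    hardened = etree.XMLParser(resolve_entities=False,
                               load_dtd=False, no_network=True)
    root = etree.fromstring(payload, parser=hardened)
    print(root.text)  # None: the entity was left unexpanded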


Sure, but didn't YAML in Rails do mostly the same type of thing? It's not just XML that is dumb like this.


YAML and XML seem too powerful and too complex for their own common use cases (data storage). Markdown too - how many Markdown parsers allow for strict parsing against an HTML whitelist, and don't allow native HTML at all by default?
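
For the record, a sketch of the YAML analogue (PyYAML shown; the tag below is a standard PyYAML feature): the full loader will construct arbitrary Python objects, which is the same class of hole as the Rails exploit, while safe_load only builds plain data and rejects the tag.

    # yaml.load with the full Loader would call os.system("echo pwned")
    # while constructing this document; safe_load refuses the tag.
    import yaml

    doc = '!!python/object/apply:os.system ["echo pwned"]'

    try:
        yaml.safe_load(doc)
    except yaml.constructor.ConstructorError as exc:
        print("rejected:", exc)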


I've never even thought of that. Wow. Obvious now of course.



> loading external entities referenced in an XML document automatically? What kind of joke is that?

Your browser does much the same when parsing (X)HTML. LaTeX naturally includes ‘external’ resources when building an output file. There are tons of examples like that, loading external entities per se is not wrong, it’s mostly just wrong under these specific circumstances.


I think the important difference here is that with browsers, the behavior is well-known and well-understood, there are a very small number of them, and you're unlikely to run one in a production environment -- barring, say, something like PhantomJS, which still has all the foregoing in its favor.

This compared to XML parsers, for which there are often multiple per language, each of which may be implemented to wildly different levels of sophistication re: security.


My point was that it is not an unreasonable thing to have some sort of #include directive in a data format, and certainly not in a markup language.

The problem here was the same as in the rest of the software industry: programmers are far from ‘engineers’ in their desire to understand their tools, use the right tools and build bug-free code. Instead, most people hack for fun with tools they hardly understand and then somehow manage to complain if they shoot off their feet while doing so.

Hacking for fun and shooting off extremities is of course perfectly fine, but the blame for the latter lies with the programmer (and possibly their education), not the tools.


XML made it far more manageable to create machine-to-machine APIs. I can say we surely would not want to go back to the '80s and '90s, when doing that stuff was a nightmare.


Yes, it was a drunken, stumbling step forward. Let's take another one, and move to something simpler, which solves the problem better.

To quote Phil Wadler's paper about XML, where he established some of the principles that influenced XQuery: "So the essence of XML is this: the problem it solves is not hard, and it does not solve the problem well."[1]

I suggest reading the entire paper; it shows a number of shortcomings, but it's also rather enlightening about how XML actually is structured and how its semantics are defined. (I.e., in spite of that quote, it's not just XML bashing.)

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109...


Hm. In his introduction, he says, "XML is touted as an external format for representing data." To me that mostly misses the value of XML. I think of it as an interchange format, not a closely-mirror-my-datastructures format. I've used it before when I want a long-lived data format that is mostly annotated text, and I'd happily do it again.

That said, I'm very skeptical of the XML-for-everything school, and nearly murdered a group of engineers who were using XML to transfer data from one spot in an app to another, even though it all ran in the same JVM. So maybe I'm more defending a small subset of XML rather than the XML-industrial complex.


How about protocol buffers?


I'll give you that it's better than CORBA.

What I don't agree with is that it allows a "load this", where "this" can be a local file, a URL in some cases, basically anything.


That's an overly narrow view. We shouldn't avoid powerful features merely because power can cause problems.

Where would we be if web browsers couldn't use external resources?

General-purpose parsers/renderers need to have tightly locked-down, sensible defaults, or even security-oriented feature subsets, but that doesn't mean we should remove one of their most useful features altogether, or avoid them because they're powerful and dangerous.


There is a big difference between a web browser on your local machine and a server processing all the untrusted data that is thrown at it.


scnr

    XML - It seemed like a good idea at the time


Not to everyone. Some of us greybeards tried to warn against it:

"XML is simply lisp done wrong." — Alan Cox

but the gee-whizzery won.

"XML combines the efficiency of text files with the readability of binary files" — unknown

"XML is a classic political compromise: it balances the needs of man and machine by being equally unreadable to both." — Matthew Might

Anyone remember XHTML?


The guys behind this report have an interesting pricing model: Pay what you want!

https://detectify.com/pricing

The pricing model has apparently worked so far. Are there any active users of Detectify here who can share their experience?


I tried them on a client project today, and they found some (minor) form-post issues. The scan took roughly 1.5 hours, which results in an "incurred cost" of $4.75 (for the cloud resources needed), and they suggest a 5x "gratitude" factor, which I gladly paid.


So they tell you how much they spent on cloud resources and even suggest a gratitude factor? That is actually a great way of getting paid (enough) even if you do not enforce any fixed pricing, cool! :)


Very interesting indeed. There is an auto driver in India (an auto is a taxi-like vehicle) who uses the same pricing model for his services - http://www.thebetterindia.com/4813/tbi-heroes-ahmedabad-no-r...

Nice to know about such things :-)


I like the price but they found nothing, honestly, and we're not very nice when I emailed them for support.


Just wanted to point out the hilarity of your typo: "we're not very nice when..." vs "were not very nice when..." completely reversed what you meant to say ;P.


Interesting to see this hit big companies like Google. The problem, I think, stems from the fact that most people treat XML parsers as a "black box" and don't enquire too closely into all the functionality they support.

Reading the spec which led to the implementations can often reveal interesting things, like support for external entities.


I would say the flaw is that XML parsers will try to resolve external entities on their own, by resolving file paths or whatever. They shouldn't do this by default: they should instead take a programmer-supplied entity resolver and call into that.

They could also provide a canned resolver which hits the local filesystem and/or the web, which programmers could supply if they wanted, but this should not be a default. The programmer should have to explicitly specify that access.

I've had related problems where XML parsers would try to go off and fetch DTDs from the web, then fail, because they were running on firewalled machines that couldn't see the servers hosting the DTDs. That took us by surprise. We installed an entity resolver that looked in a local cache of DTDs instead, which was fairly easy. But I would prefer not to have been surprised.

Also, all this stuff should be running in a jail where it can't even see any interesting files, of course.
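
As a concrete sketch of that resolver approach (lxml's resolver hook shown; the cache directory is hypothetical):

    # A programmer-supplied entity resolver: DTD lookups are served
    # from a local cache, and every other external lookup is resolved
    # to an empty document instead of touching disk or network.
    from lxml import etree
    import os

    class LocalCacheResolver(etree.Resolver):
        def resolve(self, url, public_id, context):
            if url and url.endswith(".dtd"):
                name = url.rsplit("/", 1)[-1]
                return self.resolve_filename(
                    os.path.join("/opt/dtd-cache", name), context)
            return self.resolve_empty(context)

    parser = etree.XMLParser(load_dtd=True, no_network=True)
    parser.resolvers.add(LocalCacheResolver())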


> They shouldn't do this by default: they should instead take a programmer-supplied entity resolver and call into that.

Then the programmers would most probably write their own resolvers with even more bugs. You would have 10,000 broken implementations of that code, half of them copied from a Stack Overflow example with security left as an exercise for the reader.


You could have a default implementation that callers have to set, e.g.:

    xmlSetFileResolver (xml, xmlDefaultFileResolver);
Callers could provide their own, but most will use none or use the supplied default.

Of course nothing helps for people who code by copying and pasting, rather than understanding what the API or library does.


Also, horrible defaults in XML parsers. That any XML parser allows retrieval of DTDs without explicit options specifying allowed sources, etc. is beyond me. It's not just local file access, which becomes a security hole when you let users pass you XML files, though that is one of the worst ones.

But the number of times I've seen production apps that turn out to request DTDs or schemas from remote servers behind the scenes has made that one of the first things I check if I am tasked to maintain or look into anything that parses XML. Often these apps stop working or slow down for seemingly no reason because the DTD or schema becomes unavailable, and nobody understands why.


The crazy part about this is that I remember having these conversations over a decade ago, and it was very clearly recognized as a major security, reliability, and performance problem, but the greater XML community basically just shrugged it off.

One really interesting aspect of this is that many applications suddenly broke when the Republicans shut down the government last year because a number of XML schemas are managed by government agencies who were suddenly legally unable to provide their normal web services:

http://gis.stackexchange.com/a/73777
http://forums.arcgis.com/threads/94294-Expected-DTD-markup-w...
http://www.catalogingrules.com/?p=77

Makes me wonder whether it's time to start contributing patches to disable bad ideas like this by default — some places are clearly paying a significant amount to serve content nobody should need: http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dt...


It's bad practice to fetch an external DTD on a server you don't control, first for security reasons, second because your application then depends on something that can go away anytime, third because it's rude to the third party.

twic is right that one should always use entity resolvers that point to local resources and that parsers should run in a sandbox without external access.

He's also right to say that by default parsers shouldn't go fetch external resources; I think the reason is historical: entity resolvers appeared later than the parsers themselves.


It is bad practice, but you know that it is uncannily common, right?

Just remember that the W3C had to impose download restrictions on the (X)HTML DTDs (http://www.w3.org/Help/Webmaster#block)


I agree, this can be summarised as "abstraction hides bugs". I believe that although abstraction is a powerful tool, there is such a thing as too much of it, and when reading an XML document can cause access to other files, maybe even across the network, perhaps things have gone a little too far. This isn't like an obvious #include or @import, it's much more subtle.

When I first noticed that HTML doctypes have URLs in them, I inquisitively tried accessing them, and it brought up a lot of questions in my mind about why it was designed that way, what would happen if the URLs no longer existed, etc. Such an explicit external dependency just didn't feel right to me. Unfortunately most people either don't notice or seem to ignore these things...

Interestingly enough, not all XML parsers support external entities; the first one to come to mind is this: http://tibleiz.net/asm-xml/introduction.html


They are supposed to be identifiers and not resolved. But using HTTP URLs for something not meant to be resolved is odd...


XML legitimately scares me. The number of scary, twisted things it can do makes me shudder every time I write code to parse some XML from anywhere - it just feels like a giant timebomb waiting to go off.


> every time I write code to parse some XML

Why would you write code to parse XML?

Use an existing parser to parse.

Use XSLT to modify/transform (including generate JSON/CSV/other).


Ironically, using an existing parser is what opens you to this vulnerability in the first place. If you hack your own together based on a vague idea of what XML really is, you're very unlikely to "correctly" handle entities, you'll probably just put in enough to handle simple XHTML entities, and that makes you immune to this problem! It's the compliant parsers that are vulnerable to this....


Or, if you use existing parsers in a language like Haskell, you know parsing is supposed to be a pure function. If parsing suddenly requires IO effects, you can be suspicious and try to figure out what is going on.


Even with Haskell, someone could sneak in an unsafePerformIO call if you aren't careful. Of course this is trivial to detect with compiler flags etc.


We're not talking about a malicious XML library here, though. We're talking about a misunderstanding regarding what happens during legitimate parsing of XML.


I was just responding to you about pure functions. You can make a Haskell function with a pure type signature that includes a call to unsafePerformIO.


You can, but:

A) Legitimate libraries don't (unless the IO action is in fact pure)

B) Rogue libraries that do this will not generally work: laziness, optimizations, RTS races can all make the IO action run 0..N times, arbitrarily.

C) It doesn't change the fact that in Haskell, the XML library exposes the weird XML behavior of looking up external entities by being in IO (my original point) -- because of A.


More likely, they'll just write bindings to libxml2.


I wrote a libxml2 binding in Haskell (http://hackage.haskell.org/package/libxml-sax). It was an absolute nightmare, in part because handling entities safely requires a lot of hoop-jumping (and I'm not even 100% sure I caught all the places libxml2 does unsafe stuff).


"absolute nightmare" sounds like you did pretty well for libxml2.


Okay, the parent comment obviously came out wrong and is starting its descent into white hell... ;-) I'm not going to delete it, since it would be unfair to the child comments.

XML is for some reason a super-controversial technology that is apparently almost universally hated, and XSLT even more so. I hope I'll not be downvoted even more by asking what's scary about being downstream from a (serious, well-maintained) XML parser?

(And I love XSLT. What can I say.)


What's "scary" (not the term I would personally use) is that the libraries typically aren't safe by default against malicious use. Users of the library have to know a lot in order to make them safe. See https://bitbucket.org/tiran/defusedxml for some of the potentially nasty gotchas in XML and XML-related technologies. Quoting from it:

> None of the issues is new. They have been known for a long time. Billion laughs was first reported in 2003. Nevertheless some XML libraries and applications are still vulnerable and even heavy users of XML are surprised by these features. It's hard to say whom to blame for the situation. It's too short sighted to shift all blame on XML parsers and XML libraries for using insecure default settings. After all they properly implement XML specifications. Application developers must not rely that a library is always configured for security and potential harmful data by default.
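
A minimal usage sketch; defusedxml takes exactly the reject-by-default stance argued for here:

    # defusedxml raises instead of expanding entity declarations,
    # regardless of how the underlying parser is configured.
    from defusedxml.ElementTree import fromstring
    from defusedxml import EntitiesForbidden

    evil = ('<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
            '<foo>&xxe;</foo>')

    try:
        fromstring(evil)
    except EntitiesForbidden:
        print("rejected: entity declarations are forbidden")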


I think cheald probably means writing code to invoke a parser to parse XML. Presumably if you had written your own parser (generally, not a great idea) the resulting behaviour would not be "scary, twisted"... [at least to the person writing the parser].


Yes, indeed. :)


He very clearly said 'write code to parse' not 'write a parser'. The former obviously USES a parser.


Takeaway: XML should not be used (at least for user input). It is too powerful, too big. It is much too hard and expensive to test and validate.

Input from potentially malicious users should be in the simplest, least powerful of formats. No logic, no programmability, strictly data.

I'm putting "using XML for user input" in the same bucket as "rolling your own crypto/security system". That is, you're gonna do it wrong, so don't do it.


Off-topic: the reply was generated with Google's internal meme generator; I read about it here: https://plus.google.com/+ColinMcMillen/posts/D7gfxe4bU7o

I actually dug it when I read about it a few years ago, and it's awesome knowing that it was probably used for this reply :)


A job well done. This is actually impressive, and quite interesting to see what you were searching for (afterwards it seems logical :))


Is there a startup that can help automate custom attacks on websites? Like guiding the webmaster to look for holes in their setup. I'm guessing some security experts could do a good job educating new businesses on how to prepare for the big bad world.


Hi! In fact, that's exactly what we do at Detectify. Just check out https://detectify.com!


I think you just proved that writing an excellent blog post like you did is an amazing way to get new customers!! Maybe make it a tad more explicit in the post (or page) what Detectify does. I personally had no idea... but I just checked the homepage because I liked the design and was curious, and it's only then that I realized what you guys were doing.



Check out Burp Suite as a good tool to run: http://www.portswigger.net/burp/

Owasp is also a good resource for learning: https://www.owasp.org/index.php/Main_Page


Check the site of the blog post...


For those who'd like to know more about xml-related attack vectors, here's a nice summary: https://pypi.python.org/pypi/defusedxml


Very cool hack. Is $10,000 around the top end of what Google will pay out? This seems like quite a serious bug as far as they go.


No.

You can see the general payout levels here: http://www.google.com/about/appsecurity/reward-program/ . Normally the top payout is about $20,000, but for Chrome, two people have currently been rewarded with $60,000 each. There is an overview of the top payouts here: http://www.chromium.org/Home/chromium-security/hall-of-fame.

Some payouts are $1337, $3133.7, or $31336 :P

Microsoft even rewards up to $100,000 for security issues in the latest OS (currently Windows 8.1).


These payouts are for product vulnerabilities: things that Microsoft and Google ship to customers; vulnerabilities that those vendors are effectively creating on hundreds of thousands of machines they don't own.


I'm surprised nobody has mentioned containers, e.g. Docker, as a way of limiting the damage from this kind of bug. In a container whose only purpose is to run the application, /etc/passwd should be as uninteresting as:

    root:x:0:0:root:/:/bin/sh
    bin:x:1:1:bin:/dev/null:/sbin/nologin
    nobody:x:99:99:nobody:/dev/null:/sbin/nologin
    app:x:100:100:app:/app:/bin/sh


I think they couldn’t read /etc/shadow, so it’s not that bad at first. But then they could surely access some configuration file of the application itself, probably containing DB creds, and of course more information that helps to find more vulns.


It's shocking to me that baking "db creds" into a binary or configuration file is still so common that anyone would expect it to be true on a randomly selected server. Is this still the industry standard?


How else would you do it? If you use a configuration "service" the credentials to access the service must be baked in.


Well, I can think of a couple of ways off the top of my head, that I'm sure will be shouted down for being simplistic:

1) ident protocol, or something similar. On the internet, it's a disaster, but for machines all owned by the same organization, it makes sense.

2) SSL client certificates. These can be hardened in various ways, like having the certs expire every ten minutes, etc.


That's certainly not how we do it at Google.


Sadly but truly!


You should be aware that pixelating or blurring screenshots is likely not sufficient to ensure that the contents are unrecoverable.


I never understood why internal or external entities were included in XML. Can anyone explain what useful purpose they serve?


Exactly the same as #includes and #defines in C - they let you organize your code in multiple files, be more concise, and shoot yourself in the foot, repeatedly.

They were useful for document-editing use cases - remember, this was before SOAP and XML serialization, and SGML tooling that already supported this stuff existed. You can see the record of the decision here: http://www.w3.org/XML/9712-reports.html#ID5


So, when you have read access to Google's prod servers, what else would be fun to do besides reading /etc/passwd?

Getting the source?


The source is not generally accessible from prod servers - only binaries and supporting data, and only the ones running on that computer.

I guess it's possible you could find a computer that hosted both search and the codebase. But since search is external-facing and the codebase is internal, I'd bet that they don't share clusters.


What if that file is per-container and every piece of software runs isolated? It's still a potential issue, because you could retrieve other sensitive information (log files?).


Sure. I was only addressing the concern of accessing source.


I'd say logins and passwords of millions of Google accounts would be the most valuable asset for a blackhat.


Yes, golly, you must be right; I bet Google just uses rsync to copy the passwords of billions of user accounts to thousands upon thousands of machines all over the world, in plain text! It's probably in /var/lib/every_user_password_in_plain_text.txt!


Nope, but if you have root on production servers you can just peruse the RAM.

Nice throwaway. LOL.


This isn't a root exploit. It serves up files that are readable by the serving process, such as /etc/passwd. You are aware, I hope (because it's been this way for 20+ years), that despite the name there are no passwords in /etc/passwd, right? It's not considered a sensitive file.

  % ls -l /etc/passwd
  -rw-r--r-- 1 root root 2028 Dec  2 13:05 /etc/passwd


I'm kind of sad that this is a throwaway account because you're posting good responses, that are technically competent and are actually specific to the bug discussed in the article, to people who are either less informed or are talking about their vague general understanding of vulnerabilities rather than reading the article and actually discussing its contents.

Your posts are exactly the kind of thing I _want_ to read on HN. Is there a particular reason why you feel you can't post this under a general-use account?


Uhh, you do realize his 'throwaway' account is two years old with hundreds of comments? I don't know if he's partitioning, hoarding, or being playful in account naming, but that's probably a better track record than most non-throwaway accounts on this site.


I do now! Thanks for making me feel better =)


To clarify what thrownaway2424 said, in case some people really are unfamiliar;

You can't take the password out of RAM. It would be pretty insane to store it in RAM once the login process is done.

This exploit can't read RAM. Being able to read RAM from a program other than the one you are exploiting is pretty unusual today (early operating systems were much less scrupulous, however). There are lots of scary local exploits that can do this by abusing the high level of privileges granted to drivers for things like HDMI devices, but I've never heard of a remote exploit that could read arbitrary RAM. You can sometimes convince a program to dump core if you have a DoS and can run ulimit.

We used to store passwords in /etc/passwd. The user database needed to be public, so the passwords stored in it were hashed, and thus were thought to be secure. Along came the Morris worm, which used (among other things) password cracking to infect systems. I imagine there were less high-profile incidents as well, but the long and the short of it is that we now use /etc/shadow for secrets, and /etc/passwd for usernames.

While not secret, I'd certainly call /etc/passwd sensitive, but it's a small point.


The severity of this bug depends on the privileges and the setup of whatever user was running that service.


And yet it was worth $10k to Google.


It's probably cryptographically hashed. There is no reason to keep a raw password in RAM beyond the stack frame of the function that receives it from the client - at any point after that, just store & compare the hash.
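
A sketch of the store-and-compare idea (illustrative only; a real system would use a salted, deliberately slow KDF such as bcrypt or scrypt rather than a bare hash):

    # Compare a candidate password against a stored digest; the raw
    # password never needs to outlive this function. compare_digest
    # avoids leaking information through comparison timing.
    import hashlib, hmac

    def verify(password: str, stored_digest: bytes) -> bool:
        candidate = hashlib.sha256(password.encode("utf-8")).digest()
        return hmac.compare_digest(candidate, stored_digest)

    stored = hashlib.sha256(b"hunter2").digest()
    print(verify("hunter2", stored))  # True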


It would still be catastrophic if they had access to the hashed passwords of a big number of users. People use weak passwords, and those get cracked in no time if you have just the hash.

But as I said before, that also depends on some details about the setup that we don't know from this article alone.


They didn't have root, they had whatever user the XML parser was running as.


Looking for more serious bugs? It could be the first step in a major privilege escalation.


Where would you look?


/proc/self/exe


With the kind of monitoring that Google has in place, your access will last a very short time.


What kind of monitoring would you deploy that would raise an alert for a process opening and reading readable files?


SELinux. This kind of stuff is where it really shines. A correctly configured installation would block and report access to files the application is not supposed to access. Maintaining it, especially for individual applications, is work, but it seems to me that at the scale of Google it may well be worthwhile.


Cheers to Google for properly compensating these guys for their findings.


Well done. I had to deal with some similar issues in my own project, and they weren't legacy code either. This should push me to go through some of my code again.


That must have been a nasty call from Sergey to NSA headquarters earlier this week.

"Sir, I am sorry to inform you that another backdoor has been found. We will introduce two more as agreed upon in our service level agreement."


Awesome work! The bounty is a bit low though.


I wonder how many of the blurred entries were NSA.


Just $10k?

This sells for at least 10 times more on the black market. Why would one rationally choose to "sell" this to Google instead of the black market?

Some people don't break the law because they are afraid of getting caught, but I like to believe that most people don't break the law because of the moral aspect. To me at least, selling this on the black market poses no moral questions, so, leaving aside "I'm afraid of getting caught", why would one not sell this on the black market? Simple economic analysis.

Very serious question.


That vulnerability does not sell for 10x on "the black market".

* It fits into nobody's existing operational framework (no crime syndicate has a UI with a button labeled "read files off Google's prod servers")

* A single patch run by a single organization kills it entirely

* The odds of anyone, having extended access and pivoted into Google's data center, keeping that access are zero.

I'm not an authority on how much the black market values dumb web vulnerabilities but my guess on a black market price tag for this bug is "significantly less than Google paid".

Later: I asked a friend. "An XXE in a single property? Worthless. And at Google? Worth money to Google. Worth nothing to anybody else."


Exactly. Unless this could somehow be pivoted into write access, with the ability to modify server responses to clients (for phishing or installing malware), no black hat would care about this.


"dumb web vulnerabilities" that have huge implications could fetch a pretty penny for sure


No, they can't. Read the inverse of my bulleted list to see what makes money:

* Bugs that fit readily into operational frameworks (i.e., it would be reasonable to have a UI with a button invoking that bug and/or any of the 15 other bugs like it)

* Bugs that can't be killed with a single patch cycle by a single entity

* Bugs that provide long-term access, or access that is unlikely to get your entire syndicate caught

Example of a potentially lucrative web bug: bug in Wordpress.

Example of a bug unlikely to be lucrative: "read any Facebook server file".

I know that sounds crazy and backwards, but I don't think it is.


I think you two disagree on what a "dumb web vulnerability" is.


If you want to look at it rationally, you have to factor in the risks you are taking by selling it on the black market. These risks include:

- How will you launder the money? Alternatively, how will you spend it on the black market? You can't buy houses, cars, or stocks with black money.

- Will you get paid? Secure anonymous payments that are guaranteed are not trivial. I don't know if there are escrow services for the black market, but this is definitely risky. We are talking about shady actors, after all.

- Will you get caught? If you do, you will probably end up in prison.

When you take the above into consideration, I think most people would prefer $10,000 in legitimate US dollars without risk to $100,000 that might end up giving you ten years behind bars.


Bitcoin would be the preferable way to get paid in this situation.


How would you escrow it so that you can be sure to actually get the funds? Surely they're not going to pay up front, and it would be over-trusting to give a crack away on the promise of later funds, so...


Your word is incredibly important for criminal enterprises. If you fuck someone over and somebody finds out, nobody will ever do business with you again (besides the whole 'getting shot' thing). Escrow services (by way of a middle-man you both trust) are only necessary for really big jobs. In general you pay first and get your goods once payment is confirmed.


I can see that working in meatspace but here we're talking about selling an idea on the web - the buyer is very unlikely to be able to track you so they're unlikely to front the money.

Suppose you found a bug, couldn't cash it in with Google because of where you live, and so were selling it on. If the buyer won't release the funds, would you really give up the goods? Even with an escrow, proving the transfer and performing the transaction with minimum risk seems problematic to me.


> the buyer is very unlikely to be able to track you

The buyer will probably be able to track you easily; if they are paying $100k for hacks on the black market, they have the resources to find you.


Yet they're getting the cracks from you... which suggests you're good enough to be able to hide yourself away. Use anonymising proxies to connect to a machine that you Tor off to a BTC wallet that only takes in washed coins, or something. Even being able to spend $100k on [potential?] server cracks doesn't seem like enough resources to be able to take down Tor?

If they try and trace you just send a spike!!1111one


True, but any small mistake in the process on your part can come back to hurt you; look at how Silk Road got taken down.


That's not really a problem specific to Bitcoin. I've seen Bitcoin escrow services, but I'm not sure which ones are trustworthy.


No, indeed I wasn't pitching that as a problem with BTC - just asking in general how you can ensure a secret transaction will go through. You'd need a trusted escrow; a trusted escrow would probably need to have a business address [and other things] for you to trust them... but that means they'd in all likelihood be registered to handle money, and that means records of your transaction that law enforcement could eventually get hold of?


Anyone who sells on the black market already knows the answers to these. Malware, botnet and black market security researchers also know all the answers to these. Let's just say that in general, it is actually trivial to launder money from black market transactions, as long as you don't get the attention of the feds and you stick to non-US markets.


They made $10k plus a huge amount of free advertisement for their company and services (security). I reckon this release alone will earn them far more than your estimated $90k difference.

Mind you, your point is certainly valid if this were a random hacker type.


Very good advertisement indeed - I hadn't heard about their service until today, and am now giving it a try.


If you donate to charity, Google will match your donation. You can buy a smile on your face for the rest of your life, knowing your exploit built a school in Africa.

If you manage to sell this on the black market, that money is worth half when turned into "legit" money that you can spend. If we leave aside "I'm afraid to get caught", do we mean "caught by the justice system"? What would happen if you sold your exploit to some cybermob and, a few days later, some monkey on a typewriter found your exact exploit and published it online? Not your problem that it is worthless now and some mob feels you sold them crappy gear?

As for the moral aspect: think of anyone you hold in high regard, or have a loving relationship with. Selling an exploit that will be used for harm might mean harm to those you hold dear.

Then there is this simmering thing in your subconscious. Some know how to put out that fire. Others wake up in a sweat years later, after a dream where their exploit was used to find and execute a political dissident. That is: you may very well come to regret a "bad" deed in the future, when your situation and responsibilities change. You won't lie on your deathbed and think: "I wish I hadn't built that school, but had taken the money and put a down payment on my new bathroom."


You have a very strange sense of morality, IMO. I refrain from inflicting damage on others for personal gain; it's really that simple to me. Other questions are complicated and conflicting, but this one is quite clear cut to me.


> [...] why would one not sell this on the black market?

Because it is wrong to harm others for personal benefit?


I agree with you. However, companies are completely devoid of morality; their only purpose is profit, and they will hire shady lawyers to interpret the law in their favor, fire people without giving it a second thought, or collude with other big companies to keep their employees' wages low. So why would I treat them differently?

In business, morality is a luxury that some companies can't afford and most choose not to have, so it shouldn't be expected.

The only thing preventing you from selling it on the black market is the potential fame and business you may get by being able to reveal your find, which may or may not be worth it.

That $10k is not really much of an incentive from a business perspective.


> companies are completely devoid of morality; their only purpose is profit

Companies are groups of people and have many different purposes. I understand being worried about the rise in corporate oligarchy, but your argument is itself the attitude you are accusing companies of. The problem isn't companies being immoral, but people rationalising behaviour that they know to be immoral.


That's probably because I treat them the same way they treat me.

Their attitude makes sense, and sometimes it's actually necessary for a company's/entity's survival.

We all face hard choices between what's moral and what's best for our own survival; the only difference is that companies put any amount of small profit over morality, not just survival.

I don't make the rules i just play the game.


Are you a bot with a database of platitudes?


> I agree with you. However, companies are completely devoid of morality; their only purpose is profit, and they will hire shady lawyers to interpret the law in their favor, fire people without giving it a second thought, or collude with other big companies to keep their employees' wages low. So why would I treat them differently?

Perhaps, but even so, when you sell a vulnerability on the "black market" you don't just harm Google. You also harm the people the vulnerability will be used against (to phish their credit card details, compromise their servers, etc.).

(Perhaps in this case, for technical reasons you can only harm Google with this thing, not sure. But still, talking in general).


I would never consider selling it on the black market. That others lack moral principles is not a justification to go the same route.


I don't agree with you that "selling this on the black market poses no moral questions"; this gives access to Google's production servers, which can really harm Google in very bad ways. Unless Google has done specific very bad things to you and you want retribution, why would you do that to them?

But I agree with you that $10,000 doesn't sound like much, for such an exploit, and for a company like Google.

Edit: corrected typo "$10" -> $10k.


It's $10,000, not $10. Detectify is based in Europe where they use . to group digits.


Yeah, it was a typo; I meant $10k, which does seem quite low, no?


1. Because you'll be dealing with organized criminals, which is dangerous and brings problems beyond the mere possibility of getting caught.

2. I'm assuming your basis for "no moral questions" is because you'd be hurting Google, which is a corporation, not a human, and can therefore be treated with a different set of moral values. (If this assumption is incorrect you need to clarify.) However, selling this exploit on the black market may very well be leveraged to affect a lot more people than just Google. People that will be phished, scammed and extorted. That (I hope) does pose moral questions, doesn't it?

The problem is, you can't sell an exploit on the black market on the condition that it may only be used to (say) "steal from the rich and incorporated".

3. Finally, $100k earned on the black market is not worth the same as if it were legitimate, because it is very hard to spend. I can imagine that the laundering process could easily knock 50% off the value, as well as taking a lot of time and effort. Then you've got $50k, which is already a lot closer to $10k.


How does selling an exploit to criminals not pose a moral question?


> To me at least, selling this on the black market poses no moral questions

That's probably a reflection of your own morals. There are millions of people that could be affected by this bug, so I'm not sure how there isn't a moral question here.


"Why would one rationally chose to "sell" this to google instead of the black market."

Exactly because of that. One is legal, the other is not.


Your economic model does not take into consideration the value of recognition, which is a very strong motivator, often more important than money.

If they sold it on the black market, they couldn't brag to anyone that they hacked Google.


I fear that your economic analysis is way too simple.

You should include the damage to the company's reputation, should this get leaked. Especially since they work in security - who would trust their security to people who sell vulnerabilities to the highest bidder?

This could cost them much more than your quote.


The purpose of the bounty prize is not to outbid or compete with the criminals.


Maybe this weird and obsolete service was run on a small subset of servers that is not really worth that much. I would assume your journey would end up right there at that one (or n of the same) machine.


You are a scumbag, but the math is right. You would need to discover 10 of these a year to make a living wage in SF - maybe 50 if you are a team of 5. They should pay what they pay their engineers.


Maybe I just read it wrong, but it sounds like Google made an opening offer and the security group felt it was sufficient and decided to take it instead of negotiating. Maybe I'm wrong and they'd already given the details, and Google was just trying to keep them happy and provide some cash for what otherwise would've been a Good Samaritan, open-source-contributor type of report.

As long as Google is willing to negotiate, I don't see a problem with a group being satisfied with $10k and taking it.


Hi!

Bounties are always awarded after the bug is disclosed[1].

We constantly[2] upgrade the bounties whenever we feel like we should be paying more, and we will continue to do so. We also increase the rewards from the amounts in the price list if we think they result in a higher impact than what the reporter originally suspected.

We aren't actually trying to out-pay the black market. Overall, our goal is to reward the security community for their time and help with their security research, since we share the same goal of keeping all of us safe (whether it's Google services or open source/popular software[3]).

And if you are interested, you can follow news on Google's VRP here: https://plus.google.com/communities/103663928590757646624

[1] http://www.google.com/about/appsecurity/reward-program/

[2]
- http://googleonlinesecurity.blogspot.com/2010/11/quick-updat...
- http://googleonlinesecurity.blogspot.com/2010/11/rewarding-w...
- http://googleonlinesecurity.blogspot.com/2012/02/celebrating...
- http://googleonlinesecurity.blogspot.com/2012/04/spurring-mo...
- http://googleonlinesecurity.blogspot.com/2013/08/security-re...
- http://googleonlinesecurity.blogspot.com/2013/06/increased-r...
- http://googleonlinesecurity.blogspot.com/2014/02/security-re...

[3]
- http://googleonlinesecurity.blogspot.com/2007/10/auditing-op...
- http://googleonlinesecurity.blogspot.com/2011/08/fuzzing-at-...
- http://googleonlinesecurity.blogspot.com/2013/10/going-beyon...
- http://googleonlinesecurity.blogspot.com/2013/11/even-more-p...
- http://googleonlinesecurity.blogspot.com/2014/01/ffmpeg-and-...
- http://www.google.com/about/appsecurity/research/


You're right, Google should pay out $100k for all exploits turned in.


Perceived chance of being caught * cost of punishment.


Because they are a legitimate company that sells security services.



