This sounds like a new name (and possibly more executive support) for what those guys have been doing for years now.
The guys in that article have been working on finding bugs in non-Google products for a little over two years and you can see their past results through the advisory credits they've received at Microsoft and Adobe as well as from open source projects like FFmpeg.
My read of the announcement is that they are now hiring vulnerability researchers to work on arbitrary targets, the way security product companies do to staff their research labs.
Starting back in the 1990s, companies like ISS and McAfee staffed teams of bugfinders to comb through high-profile software for vulnerabilities, and kept score with advisories (and with IDS/IPS signatures and scanner checks for those bugs, which they'd have a semi-proprietary interest in, since they found them). They'd use this to market their expensive products. The researchers generally had a long leash so long as they were (a) looking at software that customers might care about and (b) were actually finding things.
What Google seems to be doing is starting a lab like that, but without the hooks into products and marketing. Instead, Google has cash, wants to retain security researchers, and so will throw money at a vulnerability lab run for the common good: partially for the PR win, partially for the knock-on benefits to Google of having lots of good security people, and, yes, partially for the good of humanity.
(FWIW: I worked at what I think was the industry's first vulnerability lab, at SNI, which eventually became McAfee's vulnerability team).
... where we discovered (or were at least first to publish) two whole new attack classes, tracked down something like 6 different super expensive security products and got them up in a lab, designed and implemented a new programming language, and succeeded in giving a giant middle finger to surveillance software.
It's the most fun I've had in my whole career.
Not for nothing, but working at a software security consultancy is a close second; you lose the freedom to choose your targets (at least 80% of the time), but the work is the same.
That's fascinating! I'm curious what sort of education and experience you had to land a job like that, do you mind sharing? I'd love to work in a lab like that one day, but I'm not sure what is considered "good enough" to get a career in security rather than just a hobby.
Thank you for doing the crypto challenge, by the way. It was a lot of fun, and I'm looking forward to the next set :). Has anyone else finished it in C# yet?
Slightly off topic, but like your appsec reading list above, are there any crypto resources you value for someone taking the self-learning route into real-world crypto?
I've been looking into buying Grey Hat Python as more of my job starts to require scripting, but I'm put off by that first review (and overall, the reviews aren't glowing). Interesting that it comes recommended by you, someone whose opinion I respect.
I don't know how long ago your list was made; would you still recommend Grey Hat Python?
I didn't think Grey Hat Python was a great book, but it serves a valuable purpose that I'm not aware of another book supplanting: it shows you that you can use real programming to do security-related tasks, instead of being limited to just using tools.
The widest chasm separating security professionals is the one between those who can only use tools other people provide and those who can write their own. And it's more than just being able to write them: you have to be able to write them quickly enough to be of use during an engagement (which usually lasts only 1-3 weeks).
A lot of security testing at the non-entry level is putting together specific tools to accomplish an engagement-specific task. You don't generally spend a lot of time building giant edifices, it's usually lots of small things that you mostly throw away between gigs (minus whatever underlying libraries you favor using as construction components).
In that regard, I think Grey Hat Python is still a good book to introduce you to the idea of using real programming to do hacking, even if you never write a line of Python on an engagement.
I don't think it's an especially great programming book, but it is a great cross-section of the programming tasks you actually do when working in a vulnerability research lab (or software security consultancy, for that matter).
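To make that concrete, here's the flavor of quick, disposable script I mean, using nothing but the standard library (the hosts are hypothetical placeholders, and you'd only point something like this at systems you're authorized to test):

    # Throwaway recon helper: pull a couple of response headers from each
    # target so you know what stack you're dealing with before reaching
    # for heavier tooling.
    import urllib.request

    HOSTS = ["https://app.example.com", "https://api.example.com"]  # placeholders

    for host in HOSTS:
        try:
            resp = urllib.request.urlopen(host, timeout=5)
            server = resp.headers.get("Server", "-")
            powered = resp.headers.get("X-Powered-By", "-")
            print(f"{host}  Server: {server}  X-Powered-By: {powered}")
        except OSError as exc:
            print(f"{host}  error: {exc}")

Nothing fancy; the point is that ten lines of Python you wrote yourself often beat an afternoon of fighting someone else's tool.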
After lightly reading through both books, I think Gray Hat Python is a great book for more advanced security concepts, especially on the reverse engineering and exploit dev side of things, but isn't a very good book for learning Python or programming.
Violent Python on the other hand is a great book for beginners to Python and programming, and it teaches both pretty well, but it only goes into surface level security concepts for the most part.
Gray Hat Python is closer to a Windows API/x86 assembly book than a Python one. Violent Python is a real Python book and mostly covers general information security and network security concepts.
Gray Hat Python is also purely application security. Debugging, reversing, hooking, writing shellcode, exploiting... Violent Python is almost entirely network security, with one chapter on forensics. Exploit dev vs. exploit user.
It depends on your experience level and what you want to actually learn. If someone was brand new to Python, application security, and even programming, I'd recommend reading Violent Python first and then Gray Hat. If someone has more advanced security knowledge and has some decent programming skills already, I'd probably tell them to skip Violent Python.
Or if they wanted to focus on appsec vs. netsec, I'd direct them to one or the other based on that. If you want both, you should definitely read both.
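To give a rough feel for the difference (this sketch is mine, not from either book): the Gray Hat side is mostly Python poking at the Windows API through ctypes, something like the following, which only runs on Windows, while the Violent Python side is mostly sockets and scapy-style network scripts.

    # Gray Hat Python flavor: drive the Win32 API directly from Python.
    import ctypes

    kernel32 = ctypes.windll.kernel32          # Windows-only
    PROCESS_QUERY_INFORMATION = 0x0400

    pid = kernel32.GetCurrentProcessId()
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError()                # surfaces GetLastError()

    print(f"Opened handle {handle:#x} to our own process (PID {pid})")
    kernel32.CloseHandle(handle)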
The list author says:
"I had a CISSP book here as a joke, but then realized that someone who clicked "buy whole list" would end up accidentally owning a CISSP book. Far better that they accidentally end up owning David Foster Wallace's most accessible book. The state fair essay in particular, worth the price of admission."
It could also be that their semi-formal third-party security team is now a formal team. Previously it was less than a full-time project but more than a 20% project for many of Google's vulnerability researchers.
I guess that'd have the same net result though since it's much more feasible to hire for if it's been formalized.
1) What steps will Google be taking to ensure timely vendor acknowledgement and fixing of issues? It often seems a public disclosure is the only thing which will trigger them to finally act (and then they like to fire back with litigation). If a vendor simply ignores Google, will the bug go unannounced indefinitely? Is there a reasonable timeline in which all bugs - fixed or not - are made public?
2) Will Google take bug submissions from 3rd parties and offer any degree of anonymity and/or protection for the bug-finder? I routinely come across gaping security flaws but when we have an over-zealous judicial system and the CFAA, it's often not worth the personal risk to get something fixed.
The answer to #1 is in the Wired article about Project Zero:
"When Project Zero’s hacker-hunters find a bug, they say they’ll alert the company responsible for a fix and give it between 60 and 90 days to issue a patch before publicly revealing the flaw on the Google Project Zero blog. In cases where the bug is being actively exploited by hackers, Google says it will move much faster, pressuring the vulnerable software’s creator to fix the problem or find a workaround in as little as seven days."
Regarding point (2): if you're thinking about things like SQLI and XSS vulnerabilities in websites, that's not the kind of research Google is likely to be doing. But if you're thinking about finding memory corruption flaws in software you install on your own machines: you have very little to worry about from the CFAA to begin with.
Good to know, thank you. What, in your opinion, is the best way for someone to submit a security vulnerability (the sort which could land them in legal trouble: they don't have permission to be penetration-testing, it could be viewed as malicious, etc.), or is it simply not worth the risk?
You need to clarify. Are you talking about submitting a vulnerability in someone else's website, or in a product you installed on your own computer?
In the latter case, it's pretty straightforward. Your legal liabilities in that situation are (so long as you don't demand money) civil (you may have violated a click-wrap that will probably prove toothless against security research). You can tell the vendor directly if you like (from experience, this isn't fun; you'll probably spend a couple of hours in tier 1-2-3 tech support hell). If it's an important target, you can also talk to bug bounty programs.
In the former case, it's not straightforward. Start by scouring the target's website to see if they have a disclosure program, which will probably equate to permission for looking for flaws in their site. If they don't, it's very possible that you've broken the law in whatever process you used to find the vulnerability. Submit anonymously and carefully. If they prove themselves to be cool, you can always take credit later.
You would think they would be more focused on vulnerabilities in websites and the underlying technology, since Google is an internet company (though they have morphed into a wider spectrum). Most recent public security outcries have related to internet services, so my assumption was that would be their focus.
Part of me hopes they tread in this gray area so that we are forced to address issues with the CFAA and how its presently enforced.
It doesn't matter if you are sitting on a bean bag chair at a Google lab; you have no defense in criminal court if you are unauthorized to test for a vuln in someone's system. Disclosing a flaw is sufficient grounds for a felony.
This isn't a "grey area". It's illegal to test web applications run by other people for security vulnerabilities. The examples you've seen of above-board security research targeting web apps fall generally into these buckets:
(a) Web apps run by other companies but which are available for download to run on one's own machines
(b) Web apps run by other companies that have published bug bounties or other forms of permission for testing
(c) Web apps tested carefully and, at first, usually anonymously (or, if not, then by researchers working from jurisdictions where CFAA is hard to enforce)
Right, they mentioned researching bugs like Heartbleed. Discovering and disclosing bugs in underlying open source technology hasn't been a target of prosecution under the CFAA in the past, so I follow you there.
If a prosecuting attorney wanted to, couldn't they charge the researchers that discovered the bug? CFAA doesn't specify intent to do harm...
If someone used the disclosure of the bug to do harm, you would have assisted in unauthorized access of information.
I don't understand what you're asking. The Heartbleed research they did probably didn't have any CFAA implications for them.
The distinction isn't between "open source" and "closed source". It's between "software running on machines you own" and "software running on other people's machines".
Hundreds of thousands of vulnerabilities have been discovered in the past decade and a half. None of those researchers have been prosecuted for disclosing the vulnerabilities. If you disclose a bug and someone unrelated to you breaks the law with it, CFAA does not say you're liable.
Haha, not to drag this conversation out, but just a hypothetical scenario:
I download and install Drupal on my web server. I find an SQL injection vulnerability in the login form. I post the vulnerability on a public forum, and someone proceeds on their own to deface a government website using that knowledge. You don't think they would charge you with assisting?
No, they would not. The equivalent of this scenario happens all the time. In the one case I'm aware of where the developer of exploit code was found criminally liable for its use, that developer had a direct relationship with the person who actually did the exploiting (for commercial gain).
> I'm aware of [one case] where the developer of exploit code was found criminally liable
You're probably referring to Stephen Watt (a.k.a. "Unix Terrorist"), who wrote the POS sniffer. His friend, Albert Gonzalez, then used it to steal 170 million credit cards from TJ Maxx and other vendors back in 2006-2007.
The law doesn't require a relationship or commercial gain. I can't share my story (and I'm sure there are others out there), but I can assure you: if you mess with the wrong people, the CFAA has no bounds.
Look at it this way: what if Drupal's own security committee posted a description of a SQL injection vulnerability (as they often do when issuing patches), and another person takes that info and uses it to attack an unpatched Drupal government site? Would the security committee members be charged?
I agree, this was probably a bad example because involvement seems a bit of a stretch (although posting on a known black hat forum might change some variables).
What if you had only told your friend of the particular vulnerability and he used that information to write an exploit?
Writing an exploit is also not illegal. Using the exploit is a problem.
If you tell your friend about a vulnerability, and he writes an exploit and then uses it to break into a retail chain and steal credit card numbers, then you have a problem. Not because finding the vulnerability is illegal, but because there is now a chain of evidence that might link you to something that is unambiguously illegal.
You won't be held liable for unwittingly enabling the crime; you'll be accused of sharing the vulnerability with the express purpose of enabling the crime (actually, maybe, technically, with the purpose of enabling any crime; conspiracy laws get weird). That intent will be something that needs to be proved in court.
That liability is still a stretch, doubly so if, the moment you find out that your friend is doing something crazy, you inform the authorities.
> This isn't a "grey area". It's illegal to test web applications run by other people for security vulnerabilities.
Scanning for Heartbleed is a good example of why it may well be a grey area: through a normal, authorized connection, it becomes apparent whether the implementation is vulnerable.
Or are you referring specifically to sending a malformed heartbeat in the context of an authorized connection?
Sorry, I should have been clearer: I meant the former case of finding vulnerabilities in a website/API/web service/etc. Your answer was comprehensive in either direction though, and thank you.
For point 1, The Hacker News podcast (not related to this site, but Space Rogue's much older Hacker News Network) ran a list of all unresolved advisories in the Zero-Day Initiative older than 30 days. Apparently just being on the list, plus the popularity boost from Space Rogue's podcast, was enough to shame some companies into closing old vulnerabilities.
I am surprised that they didn't use something (other than Google Code) where one could more easily search by vendor or product, or other kinds of tags like "privilege escalation possible", "CVE-2014-0160", or "stack overflow" (as far as I know Google Code Issues doesn't support tags of any kind; maybe I'm wrong?), but I can see the appeal of using off-the-shelf code from somewhere else in Google.
As of now, this database contains some strange entries. Perhaps they have issues with access permissions?
01 | Invalid | This is a test
49 | Invalid | <please do not file bugs here>
50 | Invalid | <please do not file bugs here>
51 | Invalid | Random Guy Has Access To File Bugs
52 | Invalid | Google PR doesn't respond to press inquiries
53 | Invalid | The issue is the blog
54 | Invalid | FR
55 | Invalid | <please do not file bugs here>
56 | Invalid | hello
Another article or comment I read about this (or maybe an interview, IDK) highlighted the main reason behind this endeavour: more bugs squashed means a safer internet, and a safer internet means people will be more likely to click on ads. Ads have a bit of a trust issue: ad networks have been used to distribute malware via legitimate sites, and sites behind ads have frequently been serving malware themselves.
So, much like Google's other 'free' endeavours (Chrome, SPDY), this is a project intended to make the web safer, faster, and more trusted, which by extension leads to more ad impressions and clicks.
Hmmm... job-for-life with enforced zealous national patriotism, vs five-year job with enforced zealous corporate patriotism (and everything is so god-damned colourful).
They'd have a tough sell to get NSA employees, I would think.
The NSA pays quite well, and has excellent benefits. Also, if you want to work on really seriously challenging mathematics outside of academia, the NSA is close to being the only game in town.
The NSA pays government salaries, if you want real money you need to be at an outside vendor (a la Snowden). I'm sure the benefits are great, but so are Google's.
Time for some trust-building with the community, after that, um, unpleasant discovery that the NSA has been all up in Google's systems, and that Google willingly accepted it.
As for Google, Facebook, Apple, Microsoft, etc.: we have seen the list, and I've got to say, scumbags.
They sold us all out and threw us under the bus.
Lavabit, a simple company, put up more of a fight than a multi-billion-dollar giant.
That fibers have been tapped by state actors was well known pre-Snowden (I recall reading a story about how subs were used in Russia, etc.), yet Google only completed encrypting their private fibers post-Snowden.
The assumption pre-Snowden was that only "hostile states" did that kind of thing, e.g. fibres in and out of China were probably tapped, but most of the internet wasn't. Because, you know, search warrants do exist.
It's rather arrogant to say "everyone should have known". Snowden's leaks have been making waves for a solid year now precisely because he showed that reality was the worst-case scenario that only the most extreme of the extreme had postulated previously.
Sure, but as far as we know Google was not aware of it (it = US internal tapping) and introduced encryption of private traffic to counteract it. I'm not aware of articles proving they knew about the taps. Are you? (Yes, we can speculate they did, but let's put it that way.)
There's also the fact that Google only started challenging the FISA court post-Snowden. To my knowledge, they're not even challenging the unconstitutional spying, just the gag order preventing Google from telling you how frequently Google turns over your data to the feds.
This isn't just some theoretical, might be spying on "those people" sort of thing either. If you're reading this website, you probably are/have been a target:
Since Google openly acknowledges that it never really deletes anything from Gmail and other services (disk is cheap), they're providing information about you that even you have long forgotten.
And finally, if you say "I have nothing to hide, I have nothing to fear" then you have completely missed the point of how this stuff works. Remember they target you, everyone who contacts you, and everyone who contacts that extended group.
With that information, they find out who you love, and which ones of those people have pressure points. "Viraptor, it has come to our attention that your father is evading taxes. You wouldn't want anything bad to happen to your father, would you? We would like you to do some work for us in your role as sysadmin. Then we can forget about that incident..."
Many strawmen, but nothing relating to my comment, which was "do we know Google knew about the taps". I'm not aware of any article proving that was the case.
Re. the first link: companies the size of Google will meet with the NSA at some point. They have to comply with many regulations and will have private chats at a high level. They even willingly run NSA's software (SELinux). This doesn't prove or disprove cooperation in communications tapping.
Just one of your points I wanted to address.
> Google never really deletes anything from gmail and other services (disk is cheap)
Also, data invalidation is hard. Probably every company that's big enough should have this in their terms and conditions, unless they assign a full drive to each customer and do a hardware wipe on it when something is deleted. Deleting a file is essentially just setting a "this is deleted" flag until other data eventually overwrites it. I'm willing to bet none of the companies you interact with guarantees your data is physically deleted. Everyone should be aware of that.
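As a minimal sketch of that "deleted flag" idea (not how Google in particular implements it, just the generic soft-delete pattern; the Message type here is made up):

    # Generic soft-delete pattern: "deleting" sets a flag, readers skip
    # flagged rows, and the underlying bytes stick around until something
    # eventually overwrites or compacts them.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import List, Optional

    @dataclass
    class Message:              # hypothetical record type
        id: int
        body: str
        deleted_at: Optional[datetime] = None  # the soft-delete marker

    def delete(msg: Message) -> None:
        # No bytes are destroyed here; we only mark the record.
        msg.deleted_at = datetime.now(timezone.utc)

    def visible(messages: List[Message]) -> List[Message]:
        # Queries simply learn to skip anything flagged as deleted.
        return [m for m in messages if m.deleted_at is None]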
You may want to look at an article I wrote in May 2013, which was the first to disclose that Google was challenging two secret National Security Letters in court. This was before anyone except Glenn and Laura had heard of some guy named Edward Snowden:
http://www.cnet.com/news/justice-department-tries-to-force-g...
There are other examples as well, like the Feds' subpoena for search logs that Google fought in court and mostly won. You may recall that Yahoo, AOL, Microsoft received the same subpoena but did not fight the Feds in court; they instead quietly complied.
“Today, Mehdi added some detail concerning what actually happened when the request from the Government was made. First, the Government had asked for information that could identify people on an individual basis (most likely, an IP address). Microsoft declined this request, and instead handed the Government a watered down version of data, which Mehdi made clear did not include personal information. The information provided by Microsoft, Mehdi said, consisted only of a sample of search terms and their frequency, as well as a random sample of pages in the MSN Search Index.”
Just a minute of research would have led you to this, but of course, that would run counter to your pro-Google bias.
> Can't take this seriously any more. Used to like Google, not any more.
> Remember this was a company co-operating with the NSA.
I don't have the same suspicions you do, and I don't know of any evidence of Google willingly co-operating with the NSA beyond their legal obligation to, but it might make more sense for Google to spin off this team into a non-profit organization that they fund. Project Zero would get more credibility and independence, and maybe more publicity and contributions from the community. Maybe there's some kind of financial benefit for Google separating it out of its org, but I don't understand corporate structures or finance even a little.
Cooperating when they had a court order compelling them to. The alternative would be to shut down all services and see people go to jail. They didn't have a choice.
> The guys in that article have been working on finding bugs in non-Google products for a little over two years and you can see their past results through the advisory credits they've received at Microsoft and Adobe as well as from open source projects like FFmpeg.
For example see Ben Hawkes on this list of Google CVEs in other companies' products: https://www.google.com/about/appsecurity/research/