"Unsafe" internet is not an inconvenience for Google. It's a mortal enemy.
With botnets all around clogging networks they will not be able to serve videos, search results or ads. With malware stealing your credit card details you will not buy stuff online and firms will stop buying ads.
Basically, it's not a matter of choice for Google to make internet safer. It's an absolute necessity long term. The big questions is exactly _how_ to make internet safer - and this announcement shows they at least have an idea where to start.
It's incredibly exciting to consider just how much talent Google has managed to amass on this team. Say what you will about whether it's an entirely altruistic endeavour, but I don't think anybody can dispute that devoting a team of this calibre to vulnerability research is great for the internet as a whole.
I've seen Ian and George work in passing, and if the rest of the team is half as capable as they are, I'm expecting some really great findings to surface from the project.
Are there any good online resources that someone experienced in this field would recommend that go through the basics of how to perform this type of bug hunting?
I have a background in computer science but was never given the opportunity to take any classes in reverse engineering and exploitation of software. Thanks!
If you want to go in depth: I learned a lot just by reading the interviews (and following the links) in KrebsOnSecurity's 'How to Break Into Security' series. Here are the ones I had in my bookmarks (there are more if you use the search).
I'm impressed by djb's MCS 494 ("UNIX Security Holes") course. This was kind of famous at the time because students were taught to find real security holes in software as homework (and they sure did).
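For a flavour of the bug class those students were hunting, here is a contrived sketch (my own, not taken from the course materials): an unchecked strcpy() into a fixed-size stack buffer, the archetypal UNIX security hole.

```c
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[64];

    if (argc < 2) {
        fprintf(stderr, "usage: %s <name>\n", argv[0]);
        return 1;
    }
    /* No bounds check: an argument longer than 63 bytes overruns
       buf and smashes the stack; in a setuid program this is a
       classic privilege-escalation hole. */
    strcpy(buf, argv[1]);
    printf("hello, %s\n", buf);
    return 0;
}
```

The fix is equally classic: bound the copy, e.g. snprintf(buf, sizeof buf, "%s", argv[1]), or reject over-long input outright.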
You might enjoy A Bug Hunter's Diary by Tobias Klein.
It's a detailed series of technical essays about vulnerabilities the author discovered in real software, how he went about finding and reporting them, and what happened. The appendices are helpful for those without a background in some of the techniques he uses.
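To give one concrete flavour of the patterns the book covers (this snippet is my own illustration, not an example from the book): an integer overflow in an allocation size, one of the recurring bug classes in real-world parsers.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical record-copying helper. With a 32-bit size_t, a
 * count of 0x40000001 makes count * sizeof(uint32_t) wrap around
 * to 4, so malloc() returns a tiny buffer while the loop still
 * writes `count` elements: a heap overflow. */
uint32_t *copy_records(const uint32_t *src, size_t count)
{
    uint32_t *dst = malloc(count * sizeof(uint32_t));  /* may wrap */
    if (dst == NULL)
        return NULL;
    for (size_t i = 0; i < count; i++)
        dst[i] = src[i];          /* writes far past the allocation */
    return dst;
}
```

The defence is a pre-multiplication check, e.g. if (count > SIZE_MAX / sizeof(uint32_t)) return NULL; before calling malloc().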
The insight there is that one should always try to wrap criticism in praise: people don't like being told that they suck at their job, even if it's true. If instead of presenting themselves as destroyers they had adopted the image of mentors or teachers, things would've gone far better. Hopefully Google's Project Zero will be wiser than the IBM team on this point.
Note that this is even truer when criticism comes from an outsider, and Google's team will be doing exactly that. If they also deal with companies whose culture is very much reputation based (like in Asia), they'll have to be even more cautious.
I think providing unsolicited advice is always going to be fraught. Showing up as "mentors" and "teachers" is not going to go over well if the person you show up to teach thinks that you don't know what you're talking about. It's certainly possible that a lot of people will welcome the help, but it seems just as likely that people will say, "You come in here and think that you know our applications, but you don't know the history and the specific compromises we decided to make, etc, etc."
One problem I think is that no one ever writes the story of the major bug that got fixed in time. If you could just check the counter-factual of what would happen without security upgrades, a team like this could build a reputation for saving a company millions of dollars and reams of bad PR, and they'd be more likely to be welcomed. As it is, it can be easy for entrenched interests to make the case that security-minded people are just obsessive because, "Hey, we haven't had a breach yet!"
I meant that mentor thing in the context of IBM. I agree that it would not be much better in the case of Project Zero.
That said, I still think that a positive approach (constructive criticism) cannot be worse than plain criticism.
> "You come in here and think that you know our applications, but you don't know the history and the specific compromises we decided to make, etc, etc."
That's exactly the sort of answer that team should prepare for: it's obvious to me that whatever compromises I made in my software stack, if there's a security issue I will have to reconsider them. The whole point is to not rub it in my face, so that I accept the issue more easily (not everyone practices egoless programming). I was also saying that with the Sony situation in mind: in Japan, losing face is an extremely serious matter. I don't know how that guy handled the situation, though: perhaps he did all he could to manage their feelings. It is clear to me, though, that doing it the way the IBM black team did is a recipe for failure.
Seems like the lesson is that software QA should be well-separated from development, and that IBM had, for a while, a spectacular success with that approach.
"And what does Google get out of paying top-notch salaries to fix flaws in other companies’ code? Evans insists Project Zero is “primarily altruistic.”"
I think it's great that Google is trying to make software more secure, but I don't believe there's an altruistic spirit behind it. I think these researchers do search for bugs in software other than Google's, sure, but it's software that Google uses to run its services, so in the end the aim is still to make their products secure.
No. Google already does that, better than any other company on the Internet. From the announcement, this project is different:
> We're not placing any particular bounds on this project and will work to improve the security of any software depended upon by large numbers of people, paying careful attention to the techniques, targets and motivations of attackers. We'll use standard approaches such as locating and reporting large numbers of vulnerabilities. In addition, we'll be conducting new research into mitigations, exploitation, program analysis—and anything else that our researchers decide is a worthwhile investment.
Unless the word "any" means something different to Chris.
I think it's slightly wider than that: a web with a reputation for being insecure drives people away from the web, and therefore away from Google's services.
There are a few Google projects where the basic aim seems to be "improve the internet/web".
Selfish from Google's perspective, sure, but it's still something I can get behind.
Obviously Google makes more money when the web and 'software' has a better reputation. Even 'altruism' itself can be a business model for a company the size of Google if this improves their image and therefore their product sales.
The amount of money they hypothetically make from this initiative is so hard to quantify that at least it doesn't seem to be a business-driven decision. It seems to qualify as 'altruistic' to me.
Just think of the potential PR fallout if someone important clicks on an ad on a Google site and their computer gets exploited. What is mitigating that risk worth to Google?
I wonder if there's a possibility, using langsec (language-based security) or something similar, of building a generic tool to solve this stuff once and for all?
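The langsec idea, roughly, is to treat input handling as full recognition of a formal language: specify the weakest grammar that fits, run a recognizer to completion, and only then let application code touch the data. A minimal toy sketch of that discipline (my own, not an existing tool):

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/* Recognise msg := KEY '=' VALUE, with KEY = [A-Z]{1,16} and
 * VALUE = [0-9]{1,8}. Anything outside this tiny regular
 * language is rejected before any processing happens. */
static bool recognise(const char *in)
{
    size_t i = 0, n;

    for (n = 0; in[i] != '\0' && isupper((unsigned char)in[i]); i++, n++)
        ;
    if (n < 1 || n > 16 || in[i] != '=')
        return false;                 /* malformed key */
    i++;
    for (n = 0; in[i] != '\0' && isdigit((unsigned char)in[i]); i++, n++)
        ;
    if (n < 1 || n > 8 || in[i] != '\0')
        return false;                 /* malformed value or trailing junk */
    return true;
}

void handle_message(const char *in)
{
    if (!recognise(in))
        return;   /* reject-before-process: parsing and acting never mix */
    /* ...only fully validated input reaches the application logic... */
}
```

Whether this generalises into "a generic tool to solve this stuff once and for all" is exactly the open question: it eliminates whole classes of parser bugs, but it can't do much about logic errors in the code behind the parser.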
> Google's (GOOG) interns held coveted positions even before being featured in the film "The Internship," which hit theaters this weekend. On average, interns there are paid $5,800 monthly, while specialized software engineer interns make as much as $6,700 per month, according to the job-rating website Glassdoor.
Remember that Google pays interns fairly well. It's not like he's running copies and picking up coffee for people at some random office.
Considering that tech jobs I’ve seen advertised in England pay between 1/3 and 1/10 of what I would expect a similar job in the US to pay, that seems about right.
My friends who did internships at Google were paid almost like engineers. The main difference is that the recruiting process is lighter, because you only work there for a few months, and of course you have less responsibility.
I believe he's still an undergrad. I don't know about Google, but here an intern can be anything from a pre-university student to someone finishing a PhD.
Exactly. It’s not unheard of for “intern” to just be HR’s shorthand for “someone who is working here for a fixed timespan before returning to a university”. I know of a 40 year old professor who spent a summer as an “intern” at Apple circa 1990.
This is patchwork at best. It may slightly improve the overall posture, but they'll hunt for bugs in already-published software instead of demanding strong security principles for software engineering.
“The software security industry today is at about the same stage as the auto industry was in 1930" ... "it looks fast, goes nice but in an accident you die.” ... "The major shortfall is absence of assurance (or safety) mechanisms in software. If my car crashed as often as my computer does, I would be dead by now.” -- Brian Snow, Former Technical Director of the NSA, We Need Assurance http://www.research.att.com/talks_and_events/2008_distinguis...
“Most programmers think that getting run-time errors, and then using a debugger to find and fix those errors, is the normal way to program. They aren't aware that many of those errors can be detected by the compiler. And those that are aware, don't necessarily like that, because repairing bugs is challenging, and, well, sorta fun.
You are not giving a programmer good news when you tell him that he'll get fewer bugs, and that he'll have to do less debugging. Basically, we still live in the dark ages of programming, not unlike the time engineers were learning about boiler technology by figuring out why a boiler exploded, scalding people to death (remember the Therac-25?). People will probably have to die in order for "software engineering" to be a true engineering profession, instead of the buzzword that it is today. Sad but true.” -- Mathew Heaney
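A trivial C illustration of the quoted point: both of the defects below would crash (or worse) at run time, yet the compiler reports them before the program ever runs, given warnings enabled, e.g. cc -Wall -Wextra -O2.

```c
#include <stdio.h>

int main(void)
{
    long n = 42;
    char *s;               /* never initialised */

    printf("%d\n", n);     /* -Wformat (in -Wall): %d given a long      */
    printf("%s\n", s);     /* -Wuninitialized (gcc wants -O1 or higher) */
    return 0;
}
```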
As someone said here on HN, it took two major disasters for people in the Netherlands to build coastal defence structures. We only seem to learn from disasters.
> This is patchwork at best. It may slightly improve the overall posture, but they'll hunt for bugs in already-published software instead of demanding strong security principles for software engineering.
So how does it work for Google to demand strong security principles of Adobe for their Flash product? Finding 400 game-over bugs on their own and then telling Adobe about them seemed to get more traction.
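For context, that kind of bug count usually comes from fuzzing; Google has described fuzzing Flash at cluster scale. Here is a toy single-machine sketch of the idea (my own, assuming a target binary that takes an input file as its argument):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <seed-file> <target>\n", argv[0]);
        return 1;
    }

    /* Load a valid seed input for the target's file format. */
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) { perror("seed"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    if (len <= 0) { fclose(f); return 1; }
    unsigned char *buf = malloc((size_t)len);
    if (buf == NULL) { fclose(f); return 1; }
    fread(buf, 1, (size_t)len, f);
    fclose(f);

    for (int iter = 0; iter < 100000; iter++) {
        long off = rand() % len;
        unsigned char save = buf[off];
        buf[off] ^= 1u << (rand() % 8);        /* flip one random bit */

        FILE *out = fopen("fuzz.bin", "wb");   /* write the mutated input */
        fwrite(buf, 1, (size_t)len, out);
        fclose(out);

        pid_t pid = fork();                    /* run the target on it */
        if (pid == 0) {
            execl(argv[2], argv[2], "fuzz.bin", (char *)NULL);
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))               /* SIGSEGV etc. = a finding */
            printf("crash: signal %d, iteration %d, offset %ld\n",
                   WTERMSIG(status), iter, off);

        buf[off] = save;                       /* undo before next round */
    }
    free(buf);
    return 0;
}
```

The real thing adds coverage feedback, corpus management, crash deduplication, and thousands of cores, but the core loop really is this simple.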
Exactly. This doesn't seem to be popular around here, but unless secure coding practices are mandated, commercial aspects (first-to-market etc.) will take precedence. This is an industry-wide policy issue. Regulation works -- we should really take a hint from Federal Aviation Regulations.
"Unsafe" internet is not an inconvenience for Google. It's a mortal enemy.
With botnets all around clogging networks they will not be able to serve videos, search results or ads. With malware stealing your credit card details you will not buy stuff online and firms will stop buying ads.
Basically, it's not a matter of choice for Google to make internet safer. It's an absolute necessity long term. The big questions is exactly _how_ to make internet safer - and this announcement shows they at least have an idea where to start.