Google discloses three severe vulnerabilities in Apple OS X (cnet.com)
351 points by bhauer on Jan 23, 2015 | 133 comments



Tech journalists insisting on calling these "zero-days" (which, to its credit, CNet changed, though Ars and ZDNet are guilty of it) is really kind of irritating.

A zero-day is, by definition, an exploit that the vendor has had zero days to react to. It's not just an unpatched vulnerability with a proof of concept exploit available for it.

Apple has been aware of these issues since October.


I agree that tech journalists are at fault here. It might also help if Google wasn't calling this Project Zero.


Heh, I think that's valid. Though IIRC Project Zero is called that because their mission is specifically to eliminate zero-days - that is, find and disclose them to vendors before the bad guys start using them.


This is correct. Strictly speaking, it's a 90-day.


I disagree with this interpretation. It remains a 0-day until it is patched or has a workaround (disabling the service). Further, the day count begins when the bug is disclosed publicly, or broadly if not publicly. Internal disclosure doesn't count. It remains a 0-day even if you share it with a friend. The measurement of days has to do with its usefulness (to an attacker), which begins to diminish after wide disclosure. Then again, the term is also completely irrelevant.


> I disagree with this interpretation.

The problem with what you're trying to put forward here is that you think this is an interpretation. It's not. A 0-day has a very strict definition.

You can't just take words or expressions you don't understand and, without looking them up, decide they mean something else because you thought they did. Otherwise, the annals of medicine would look very different.


The actual usage is more fluid than that. It wouldn't be a common term if it actually only applied for the first day of every vulnerability being known.

"An attack on a software flaw that occurs before the software's developers have had time to develop a patch for the flaw is often known as a zero-day exploit."

http://www.tomsguide.com/us/zero-day-exploit-definition,news...


The truth is that even with a strict definition, it is subject to change.

The meaning of a word or term today may not be the same tomorrow, languages evolve with the way people communicate, not with the way they get defined.

0-day is a good example, because people outside the security field only care about whether they're vulnerable or not, not about the intricacies of the term.


You literally can.


How about we all agree that you can't/shouldn't, though? Because taking a contrarian position for no other reason than the fact that you literally can does nothing to advance a discussion.


I think the down voters are missing your clever joke regarding "literally" - I appreciated it.


I "get" it. It just doesn't strike me as much of a joke.


If we can't agree on what words mean, arguments about them are pointless.


I think he means "literally literally", and by that I mean "figuratively".


Do you have a source for that definition? Because every definition of zero-day I've ever seen has to do with the days it has been known to the vendor.

> A zero-day (or zero-hour or day zero) attack or threat is an attack that exploits a previously unknown vulnerability in a computer application or operating system, one that developers have not had time to address and patch. It is called a "zero-day" because the programmer has had zero days to fix the flaw (in other words, a patch is not available)

http://en.wikipedia.org/wiki/Zero-day_attack


So does that mean that it is no longer a zero-day after a day has passed? Or does it remain a zero-day because it first got released in the wild before vendors had any awareness of it?


Well, if you use a 0-day exploit to break into computer systems, but nobody discovers the hack, or they discover the hack but not the method used for the hack, I guess it remains a 0-day exploit...


If the vendor (loosely defined) is aware of it, but they can't fix it in a single day, then it becomes a 1-day, I suppose.


Given the typical lead time between the vendor being notified, the news beginning to spread, publications taking notice and assigning someone to write a story, and the story finally appearing, you're arguing that the term "zero day" should, practically speaking, never appear in the press.

I'm not sure that's a helpful definition. It's pedantic to the point of no longer applying to any real-world situation and thus sort of pointless.


I don't have a source as reliable as Wikipedia. I base my definition on how it is used in the field of reverse engineering which I've been in for a long time. In any case, people can stick with the wiki definition. Not important.


Why do you insist on reverse engineering the meaning of words, when you can just look up the source code in the dictionary?


Some words are ambiguous enough that a dictionary cannot fully capture them. I think this is one of those. Second, Wikipedia is a horrible source to trust for anything debatable.


It's debatable that the definition of "zero day exploit" is debatable. Do you also mistrust what Wikipedia has to say about immunization, global warming, homeopathy and evolution?

He asked "Do you have a source for that definition?" and you said you didn't have a source as reliable as Wikipedia, then attacked the reliability of wikipedia, which leaves your source in doubt. So what IS you source?

That's what he asked in the first place, and now that you're hopefully done casting doubt upon his source, you still haven't answered what your source is yet, except to say that it's less reliable than "a horrible source to trust for anything debatable". So please give us a link to your source, so we can see it ourselves.


It is called a "zero-day" because the programmer has had zero days to fix the flaw (in other words, a patch is not available).[0]

[0] http://en.wikipedia.org/wiki/Zero-day_attack


Note that the parenthetical contradicts the "zero days to fix" definition. ("No patch available" is not the same as "zero days to fix", despite the "in other words".) That suggests the term as commonly understood is a bit fuzzy.

Personally, I've noticed more use of "zero-day" to mean "exploits are now public but no patch is yet available" than to mean literally "programmers just learned of the bug today".


Not sure where the downvoters are coming from - while, logically, I agree with most that "zero day" should mean the vendor has had zero days to patch the bug, in practice, I frequently see it used to mean a vulnerability for which no vendor patch or workaround exists. I realize that is nonsensical, but it's a common usage. It's basically shorthand for saying, "There is no way to defend yourself against this other than completely shutting down or removing the subsystem."


No, the measurement of days has to do with the time the vendor has been aware of the vulnerability.


What if the vendor already knew? If Apple reveals 2 years from now that they had discovered it on their own 15 days before Google reported the exploit, does it become a 90 + 15 = 105 day, retroactively?


Yes, that would be a 105-day. And it would show Apple in a bad light, because they had 105 days to fix it and still did not, just as this vulnerability is a 90-day.

0-day basically means that the vendor learned about the vulnerability the same day everyone else did; it should not be used in situations where the vendor was notified promptly, yet still ignored it and didn't fix it. I don't understand why so many people have a problem with this.


Because it becomes a definition that's excruciatingly precise, but useless to almost everyone in the world.

I'm probably not the programmer responsible for fixing a bug in my OS; hardly any of us are. But we're all at the mercy of that bug being fixed. So aside from PR, there's literally no reason why I should care how long the vendor has known about it. I care how long everyone else has known about it prior to a fix being available.

If there's going to be a widely-used term for one or the other, language is going to evolve such that the term covers the latter case, because we have practically no reason to care about or refer to the former.

I would also argue that you should choose a word to describe such serious flaws in such a way that the "flaw" doesn't appear to go away if nothing changes except the passage of a very small amount of time. I don't want vendors saying, "we have no zero-day exploits" simply because they waited 10 hours to make the statement.


And I would like to add that's the reason why you should use software made in your own timezone. For all others use Australian software.


As a term, it's prone to confusion. Many use 'zero-day' to refer to vulnerabilities that are publicly-known before a patch is available. That is, the figurative 'zero day' counter starts from public disclosure, not disclosure-to-vendor.

Because bugs can't be fixed in zero time, and most readers are interested in the practical matter of whether there is a window of unpatched widespread vulnerability, this public-disclosure-focused definition may in fact be more useful.


> vulnerabilities that are publicly-known before a patch is available

We already have a term for that, though: "unpatched vulnerabilities". Using the term "zero-day" erroneously communicates to the reader that Google surprised Apple et al with this thing out of the blue, and just handed crackers the tools to start exploiting it without giving Apple time to develop countermeasures.


Indeed, but it was also an 'unpatched vulnerability' for the last 90 days. And, it was even plausibly an 'unpatched vulnerability' before it was discovered!

What's special about today, with an exploit being widely-known but no patch available? That's the new situation people are grasping for a term to describe.

'Zero-day' somewhat fits, at least from the perspective of everyone outside Apple: OSX users and those potentially attacking them. It's a brand-new green-field risk/attack for them!

It's also somewhat problematic, for the reasons you list and others. There may be a better term yet to be discovered, that maintains the special distinction for vulnerabilities that the vendor only learns about simultaneous with exploit-availability. But still a lot of people use and understand the term to apply to the window-of-danger from unpatched "Project Zero" 90-day reveals.


> That is, the figurative 'zero day' counter starts from public disclosure, not disclosure-to-vendor.

And which derives from its original usage in the late 1990s warez scene. A 0-day licensing crack was one available on the same day a new product hit the market. Many reputations were built on achieving this.


I don't think 'zero-day' has anything to do with public knowledge. A hacker can find a vulnerability and exploit it without publicly revealing it and it would be still a zero-day, because the vendor had zero days to fix it.


It's a stupid term. For the warez scene it makes sense, because a 0 day license exploit has the hallmark of having been available on day 0 for all time.

Using 0 day to refer to "days since disclosure to vendor", well, it becomes a 1-day on the very next day. Which means you can only use the term on one day out of all time.

If we used 0 day to refer to "days since release", as in "iOS 8 was released today and there's already an exploit" then that would be meaningful.


I think you are confused with "zero-day exploit". They are calling it a "zero-day vulnerability," but I'm not sure what that even means.


How is it possible in 2015 that major kernels are vulnerable to arbitrary code execution via null pointer dereference?

Leave aside memory safety, input validation, actually caring about the quality of your work, whatever. mmap_min_addr stopped a ton of attacks back in like 2007-10 when Linux had a local privilege escalation-of-the-month (and RHEL, which hadn't enabled it yet for backwards-compatibility concerns, got hit more frequently). It's not a particularly aesthetically-pleasing solution, but it's an effective one.

Are there legacy apps on OS X that require mapping the zero page? Executable? That Apple cares about supporting?

I've run into two things on Linux that need to map the zero page: versions of MIT Scheme from before 2009 (because the compiler was doing something super weird), and Wine, when running certain DOS / Windows 3.1 apps. Anything of that era probably stopped working on OS X when they killed Classic. Even Carbon has been dead for three years.

https://wiki.debian.org/mmap_min_addr

https://access.redhat.com/articles/20484
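
To make the mitigation concrete, here's a minimal userspace probe (my own sketch, Linux-specific, not from the Project Zero reports): with vm.mmap_min_addr set above zero, an unprivileged attempt to claim the NULL page is refused, so a kernel NULL dereference hits an unmapped page and oopses instead of reading attacker-supplied data.

    /* Probe whether this system lets an unprivileged process map the NULL
     * page -- the precondition for turning a kernel NULL dereference into
     * something exploitable. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (p == MAP_FAILED) {
            /* Expected when vm.mmap_min_addr > 0: the kernel says no. */
            printf("NULL page refused: %s\n", strerror(errno));
        } else {
            /* Without the floor, attacker-controlled bytes now live at 0x0. */
            memset(p, 'A', 4096);
            printf("NULL page mapped at %p\n", p);
        }
        return 0;
    }

Run it as a normal user on a stock Linux box and the mapping should be refused; set vm.mmap_min_addr to 0 and it succeeds, which is roughly the situation the OS X PoCs rely on.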


How is it possible in 2015 that major kernels are vulnerable to arbitrary code execution via null pointer dereference?

Because they're monolithic, bloated, and written in C.

(Rust guys, please don't screw up. We need a win there.)


> (Rust guys, please don't screw up. We need a win there.)

While it's true that Rust would help here, it is very unlikely that a Rust kernel project would get as far as e.g. Linux, let alone replace it.

Of course, rewriting an existing kernel stepwise would be interesting, if possible.


> Of course, rewriting an existing kernel stepwise would be interesting, if possible.

Perhaps it's possible to write a new kernel in Rust, and have it be backwards compatible with Linux, by "wrapping" the Linux kernel and drivers in sandboxes?

So a Rust kernel with some kind of built-in environment isolation, in which it can run the real Linux kernel. The running Linux kernel would then access physical hardware through a wrapper in the Rust kernel, while the Rust kernel would access hardware directly.

That's really the only way I see this project gaining widespread adoption: by leveraging Linux. Linux simply has too much momentum to be replaced with something non-compatible.

Of course, a Rust kernel could be useful for all kinds of things other than replacing Linux. Like a MirageOS-type kernel that writes drivers in Rust (instead of OCaml).


This is vaguely the idea behind Joanna Rutkowska and co.'s Qubes: there are lots of Linux kernels in different Xen domains, one for each class of userspace apps (banking, gaming, etc.) and one for each low-level service (networking, graphics, etc.). For instance, if the Linux kernel is vulnerable to a local privilege escalation, and the DHCP client is vulnerable to arbitrary code execution, then on a traditional system, anyone on the other side of your Ethernet cable can root your machine. Under Qubes, all they can do is root the Xen domain that's providing networking -- but that's not significantly more power than they already had by being on the other end of your network cable, since that domain does nothing other than networking.

https://qubes-os.org/

I'm not convinced that Linux is unkillable, though. This thread is about OS X, I'm typing this on an OS X machine, etc. I suspect that if you do a good job of working with people's hardware (Apple has an advantage, of course), can run Chrome, and can run anything that's portable between OS X and Linux, you can get pretty far.


I don't see that happening with Linux, though.

Linux, as a UNIX clone, will never use anything other than C.

Replacing C in MirageOS sounds more likely.


Wouldn't that essentially be a hypervisor written in Rust?


That's what I'm thinking. Take a look at http://spin.atomicobject.com/2014/09/27/reimagining-operatin... for an overview of what's going on in this area. If you're running containers on a hypervisor, with files on storage servers elsewhere, most of the Linux kernel is dead weight. Most of the kernel can be replaced by a modest glue library. Here's one, written in OCaml: http://anil.recoil.org/papers/2013-asplos-mirage.pdf

As containers catch on, we'll see more systems specialized to run nothing but containers. They will be much simpler than Linux or Windows. System administration will be external, as it is for cloud systems like Amazon AWS now.


Unfortunately, Rust by itself can only go so far toward securing a kernel. We need a major change in operating systems to have a safer environment that is not trivial to exploit.

My list goes as:

- kernel memory and userspace memory are hardware-separated, as they originally were on some early architectures

More about it here: http://phrack.org/issues/64/6.html#article

- microkernels

Having a formally verified ring 0 execution block, with everything else running in a less privileged space, would allow kernels to protect themselves against bad drivers and other parts of the operating system that can be used as an attack surface today

- safe userland

this is, I guess, where Rust could help the most: having a memory-safe low-level language


Several attempts have been made at secure OSes; the major problem is the prevalence of the UNIX monoculture that has come to dominate mainstream desktop and server environments.

UNIX is married with C, so any attempt to replace C means breaking with the UNIX mindset, which has proven very hard to do.

Even successful research projects like Oberon failed to win over the industry, and this was before UNIX became widespread.

Regarding microkernels, most of the embedded space OS are actually using microkernels.

I am following MirageOS, HaLVM and Microsoft's Ironclad as possible safer OSes.

This is also why I like what Apple, Google and Microsoft do on their mobile OSes to reduce the amount of allowed unsafe code. Even though it isn't at the kernel level.


> UNIX is married with C, so any attempt to replace C means breaking with the UNIX mindset, which has proven very hard to do.

I don't believe this is particularly true, although I haven't explored it very far. UNIX is married with the API/ABI exposed by the so-called "C" library. So far, there have been no languages that offer native interoperability with C and have gained significant popularity, other than C++, and C++ is certainly pretty popular on UNIX (Qt, gcc, etc.). Rust does offer that and is (evidently) picking up a ton of steam, and you can get to the point where stupid tricks that previously required a C derivative can be done in Rust.

http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...

In fairness, this often also requires things like "C" strings, the "C" locale, etc. But those things can either be done smoothly enough from Rust or are wrong anyway, so I think there's a chance to break the C stranglehold here.


You are seeing it only from the technical point of view.

UNIX and C were developed together. Like most system programming languages before it, and even those that failed in the market, C's original purpose was to bring its host OS to life.

So to remove C from a UNIX compatible OS, you need to remove C from UNIX culture, which is impossible.

The resulting OS wouldn't be UNIX any longer, it would be Plan9, Inferno, something else.

As for C++, it's pretty popular nowadays because it also came from AT&T, so it has been part of the UNIX culture since the mid-80s. But never at the kernel level.

There are OS written in C++, like Symbian, BeOS, IBM i and others. None of them are UNIX compatible OSs.

I cannot imagine any commercial UNIX vendor allowing anything other than C in their kernels, nor do I see it happening in the FOSS world.

Alternative OSes that try to research new paths in OS architecture, yes. But not OSes that try to clone UNIX culture.

If you look at my comment history, I am not very fond of C, but I just don't see it happening, from the social point of view of how communities behave.


There's no requirement for Linux to follow UNIX culture. People are already arguing that Fedora, Arch, etc. aren't UNIX in terms of culture and philosophy, and the variances are increasingly Linux-specific.

https://pappp.net/?p=969

(And I'm personally not a fan of UNIX culture, at least in 2015, partly _because_ it's a culture that thinks C is a defensible language to program in, at least in 2015.)


That much is true, but it is still thrown around at every opportunity; just look at the whole systemd discussion.

> And I'm personally not a fan of UNIX culture

Personally, I have learned a lot from UNIX, and it has allowed me quite a few interesting job opportunities in my career.

But I was also part of Amiga, BeOS, Windows, Demoscene, Oberon cultures, so in the end I enjoy systems that try new paths.

And even though I don't agree with Rob Pike's ideas for Go, I surely agree with his opinion about UNIX.


Big chunks of OS X are written in C++ (namely, the IOKit) and it seems to pretty successfully expose a UNIX-like interface.


While true, Mac OS X is not a classical UNIX system in terms of culture.


I think we need hardware support first, before we can think of any language. The support has to be transparent to every other part of the system. On top of that, the language you are using has to be memory-safe. This is probably where C falls short.

Yes, microkernels and hybrid kernels too. Yet Linux/FreeBSD/Random Unix Clone are monolithic.

It is the time for a new safer kernel! :)

http://en.wikipedia.org/wiki/Hybrid_kernel#NT_kernel


I'm setting up a system with seL4.

None of the systems you're suggesting have formal proofs of correctness, and this one not only has such proofs, but is written in C.


What is the process for patching seL4? How does e.g. adding a parameter to an existing function affect the proof, and what's the process for proving additional predicates?


The proofs are publicly available, though I can imagine there would be significant... overhead involved in adding functions to the system.


Such naivety.


> Even Carbon has been dead for three years.

With regard to only this point: probably half, if not more, of the top 15 grossing games on the Mac App Store Games page today are either Carbon apps or at minimum require the Carbon framework.


And a bunch of those are using Wine, which requires mapping code at the zero page, so they're compiled with a special load command that allows them to map the zero page.

On Yosemite, 64-bit binaries aren't allowed to do this.


Whoa, seriously? I didn't realize Wine for OS X was real enough to be usable for anything, let alone be admitted to the app store and make it to the top. That's pretty awesome.

Although every time I go "I didn't realize X on Y worked", it seems like games are the rationale, and not very surprisingly, since they exercise a relatively small part of the API surface apart from OpenGL. (Mono, some HTML 5 things as mobile apps, and Humble Bundle's asm.js collection all come to mind.)

Can OS X take a cue from iOS and require a special Apple-signed entitlement to do this (unless root overrides it in a config file), or is it not worth the trouble?

Also, does Wine actually require this in general? My understanding was that Wine needed this for legacy Windows apps that themselves map the zero page, but recent-ish apps designed for XP or later shouldn't do that, right?


Wine on OS X has been stable enough for gaming for years! Sims 3 used it, as does EVE Online, and quite a few other major titles, mostly those from EA. They all use TransGaming's Cider fork though; I'm not sure if TransGaming ever contributes back to the Wine community these days.

CodeWeavers' CrossOver is quite good as well. It can play Skyrim on my 2012 MBP with decent quality. I mostly use it for older games though - RollerCoaster Tycoon and the like.


https://code.google.com/p/google-security-research/issues/de...

That was a kernel null dereference; it really has nothing to do with creating a Mach-O that puts code in the first page, and there are valid reasons to do that - SheepShaver and Basilisk II come to mind.


Right, the null pointer gets dereferenced in the kernel, but (assuming I'm understanding the report right) the means of exploitation is to create a userspace memory mapping with something mapped at address zero, and cause the kernel to read from that address / run code at that address instead of faulting when it tries to dereference NULL. The PoC code is userspace and does this:

         // Unmap whatever is at address 0, then re-map the NULL page as
         // ordinary writable memory owned by this userspace task.
         vm_deallocate(mach_task_self(), 0x0, 0x1000);
         vm_address_t addr = 0;
         vm_allocate(mach_task_self(), &addr, 0x1000, 0);
         // Fill the page with attacker-controlled bytes; a kernel NULL
         // dereference will now read this data instead of faulting.
         char* np = 0;
         for (int i = 0; i < 0x1000; i++){
           np[i] = 'A';
         }
Am I misinterpreting this?

Re SheepShaver and Basilisk II, those are non-Mac apps, and at least on Linux, there's no requirement for an emulator (like qemu) to map address zero in the host to offer a usable zero page in the guest; it's a convenience depending on how you write the emulator, but it's by no means needed. I don't think this is true of other host platforms either, but I'm less familiar with those.


Could this be solved by writing an operating system in Rust?


Yes, mostly, although that's a fairly big undertaking, and any code you have that uses unsafe blocks, or legacy C code (think proprietary drivers), or the like is going to be a weakness. mmap_min_addr is a mitigation measure, by saying, OK maybe we'll dereference NULL sometimes, but we're going to make totally sure that there's nothing mapped there, and certainly nothing that userspace put there. So if we do dereference null, it's just a crash, not execution of userspace-provided code in kernelspace.

And it's the sort of thing that you can turn on in an existing kernel. Rewriting the whole kernel in Rust is a major (but worthwhile) project; adding this restriction is, in its simplest form, just adding `if (addr < 4096) return -EFAULT` to the page-allocation routine. As another commenter pointed out, OS X does have this restriction for 64-bit binaries now.
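In rough kernel-side pseudocode (my own sketch, not actual XNU or Linux source; all names are made up, and Linux actually uses EPERM rather than EFAULT here), the check is just a floor on where fixed userspace mappings may land:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MMAP_MIN_ADDR 4096  /* the "addr < 4096" floor mentioned above */

    /* Hypothetical hook in the page-mapping path: refuse low fixed mappings
     * so the NULL page can never hold userspace-controlled data. */
    static int check_fixed_mapping(uintptr_t addr, bool privileged)
    {
        if (addr < MMAP_MIN_ADDR && !privileged)
            return -EPERM;  /* refuse to map the NULL page for normal tasks */
        return 0;           /* mapping may proceed */
    }

    int main(void)
    {
        printf("%d\n", check_fixed_mapping(0x0, false));     /* refused */
        printf("%d\n", check_fixed_mapping(0x10000, false)); /* allowed */
        return 0;
    }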


I will be curious, as a litmus test, to see how the community's perception of this compares to its reaction to the Microsoft release.

That was a mixed bag, with voices on both sides of the aisle: some decrying Patch Tuesday, some in support of Google.

What will happen here? Will Google be lauded for consistency, or castigated for not "working more closely" with Apple (for varying definitions of "working closely", up to and including potentially delaying disclosure around arbitrary timelines)?


I'm wholly in support of Google here. Apple has a history of letting known vulnerabilities persist for years without fixes (e.g., http://krebsonsecurity.com/2011/11/apple-took-3-years-to-fix...).


Google is exploiting MS and Apple update release processes.

Both were published days before the vendor's scheduled update.


That's an interesting counterpoint, and one I hadn't considered. Definitely worthy of note (especially when you combine it with the 'zero Android vulnerabilities found' - which isn't to imply it's an actively nefarious project, but that everything should be taken with an objective eye).


This is not true and was disavowed elsewhere. Wish I had the link.



I have a serious question.

I understand the benefit of this program to the public, but what real benefit does funding Project Zero provide Google?


I can think of a few:

1. Google uses Mac OS and Windows internally, they're a big enough company that even untargeted malware that hits a small percentage of systems costs them money (not to mention targeted malware), and they're also a big enough company that the amortized cost of Project Zero, across all employees, is affordable.

2. The top-notch researchers who find vulnerabilities in their server stack (Linux, etc.) also like looking at different OSes occasionally, and this is a way of recruiting and keeping them at Google, which leads to them disclosing and fixing Linux security issues internally while waiting on an external response.

3. They're unhappy with their software vendors not taking security and vulnerability response seriously, and they're trying to shift the popular consensus / Overton window on how seriously people should care and what the industry-standard response to reports should be.


"Don't be evil" means that sometimes you should do things for the good of the community without expectation of direct rewards. Maybe I'm idealistic and naive but I think that Google tries to live up to that.


Were they following that when they closed all < 4.4 Android bugs?


Maybe it's their way of fighting the Man. They're compelled, almost by force, to cooperate with the US and other governments in spying on their users. This seems like a way to offset that ethical load by taking away some of the tools those bad actors use as weapons. I know I'd do it if I were in their shoes.


They are The Man.


Google's ads are so pervasive that they make money for just about every minute you use the internet. If your computer is infested with viruses you might turn it off and go outside, therefore it's in Google's best interests to invest in general computer security.


Google makes money off of ads.

A common monetization method used by malware distributors is to secretly replace legitimate ads that get loaded on the machine (such as Google's) with their own.

Doing this on every single web page visited by that machine for the rest of its existence, and multiplying that by hundreds of thousands of other infected machines out there, and multiplying THAT by the dozens of different black hat groups doing this simultaneously, it adds up to a lot of lost money for Google.


I understand the ad thing but that idea applies to all software companies (because they want their own browsers and operating systems used by users), not just Google.

To me it feels like they're throwing their competitors under the bus in this way as opposed to running slander campaigns or witty commercials like Samsung, Microsoft, and Apple do. I'm not saying this is their intention but that is the way it comes off and it cannot be good for their reputation.

A non-profit organization funded by the entire tech industry would have more credibility.


They would dispel any doubt if they began releasing bugs and exploits for Android as well.


Better that they fix their own vulnerabilities than release them.

They do offer a bug bounty for their own products, to encourage third parties to do to them exactly what they're doing to Apple and Microsoft.


It's not like they sit around and say: "Oh, f*ck Android, let's find vulnerabilities in other operating systems." I suppose they have long had a team working on Android vulnerabilities, but fixing and deploying is not trivial - not to mention finding the flaws.


When it comes to others they have total disregard as to how difficult to deploy any fix is. The same standard should apply to themselves, but clearly doesn't.


For a start, they could try to fix vulnerabilities in Jelly Bean's webview[1] and urge manufacturers and carriers to push updates to their phones, since 46% of the market is apparently still on Jelly Bean.[2]

Of course, it's an uphill battle with the manufacturers/carriers. But as long as Google is not applying fixes to Jelly Bean, the manufacturers can always blame Google.

[1] http://www.androidpolice.com/2015/01/23/google-issues-offici...

[2] https://developer.android.com/about/dashboards/index.html


The Android security lead has posted a statement on this issue, explaining what is and is not being done, and why: https://plus.google.com/u/0/+AdrianLudwig/posts/1md7ruEwBLF


Most of these Apple computers are owned by Google users.


And a large percentage of Google devs are Apple users.

Edit: for the apparent doubter of this, see any number of articles about Google's 40,000 macs that they use as dev machines, for example: https://www.usenix.org/conference/lisa13/managing-macs-googl... http://www.theregister.co.uk/2013/11/27/google_mac_support/


Google Engineers mostly use their beefy workstations with a Google-flavored Ubuntu installed to code. Most engineers have Mac laptops though (at least my team), even though their use is quite limited (you cannot store code there, so mostly you have to ssh or remote desktop to your workstation)

Source: worked there once


That is generally true, but not completely so. A number of android engineers develop directly on Mac, and all iOS app developers do so because there isn't an alternative. And there are some weird engineers like me who mount their linux disks on their macs via nfs and only use their linux boxes to do command line work (builds, running servers, etc.).


Did you use an IDE via X11 forwarding? Or did you just use Vim / Emacs?


Usually NX or the like.


Chromoting is more common now. Same idea, though.

(I'm a Googler)


NX is not an IDE/editor.


good PR?

Not everything has to be directly linked to selling ads, Google is a group of people. Some companies plant trees, Google has this, I guess.


Project Zero appears to be generating mostly negative PR for Google (and MS and Apple). From a wider public perception standpoint, it's almost lose-lose all around.


Well, that's one perspective. There are a lot of people who believe that disclosure like what Google is practicing will ultimately increase the rate at which bugs are fixed and decrease the frequency of computers being compromised.


I didn't say it was a bad thing, but the PR they're getting is independent of whether I like what they're doing.


They get a valid excuse for reverse engineering their competitors products so they can copy their technology.


Sometimes it's difficult to catch the sarcasm. That was sarcastic, wasn't it?


Sidenote: please do not publish re-posts; reference the original source whenever possible: http://www.zdnet.com/article/googles-project-zero-reveals-th...


Looks like the latest 10.10.2 beta has them fixed. Users with access to the public beta program should see the update.

http://www.imore.com/latest-os-x-10102-beta-kills-google-dis...


> Google's Project Zero security team revealed the existence this week of three vulnerabilities with high severity that have yet to be fixed in Apple's OS X operating system.

I feel like somebody is trying to execute some kind of buffer overflow attack against my brain.


Is it a 0-day or is it not? "0day" is not an important word except that the media has started using it. For those of us working in the field, it's just a fun term, not a legal term. It is used to characterize the risk of a bug, not the ethical handling of it. It has always been in relation to risk. The risk is greatest when it is a. unknown to anyone but a few, b. not known widely enough to affect its applicability, c. unfixable or as yet unpatched. Some in the field call bugs fitting any one of these qualifications 0days, others not. But nearly no one in the field uses this term to quantify the ethical handling of a bug (as in: age starting from the moment the vendor knows about the bug).

Consider what a 1day is. A "1day" bug is one that loses value because it is likely that some systems have now been patched, such that you as an attacker are not guaranteed that every system you touch with your exploit will fall. A 365day is likely useless, depending on the target platform. Hence, the entirety of its meaning has to do with risk, not procedure or ethics.

It is fine if you read Wikipedia's definition and decide the meaning is otherwise. After all, language changes. But if anyone in the future wants to understand the etymology of the meaning behind "0day" they should consider that the meaning appears to have changed (in the minds of many that actually don't even work in the field of reverse engineering).


From what I can tell one of the exploits allows for code execution with root privileges, another for accessing kernel-space memory and the third will crash the machine. Are any of these things not already doable when someone has physical access to the machine?


They aren't remote code execution exploits as best as I can tell. But, it's a short leap from an exploitable root escalation to total compromise of a machine. Until these are patched, any executable you download and run could potentially be a dropper for much nastier stuff. You could combine one of these with the recently-disclosed Flash exploits, for example, and you have a drive-by root exploit ready for deployment via ad networks to millions of people.


They're not doable from random programs downloaded from the Internet, except via these flaws.


Does anyone know if they only research OS and OS-environment bugs, or also, for example, Cisco/Juniper OS bugs?


This is from the Project Zero announcement post:

"We're not placing any particular bounds on this project and will work to improve the security of any software depended upon by large numbers of people, paying careful attention to the techniques, targets and motivations of attackers."

http://googleonlinesecurity.blogspot.com/2014/07/announcing-...


I also wonder if they publish proof of concepts for security bugs they refuse to fix (e.g. in Android 4.3).


They said they also investigate on Android, but they haven't yet published a single vulnerability for it, as far as I can tell.


It's right on their blog. It is a guest post, but they don't seem to have anything against it: http://googleprojectzero.blogspot.com/2015/01/exploiting-nvm...


Yes, it's a guest post. What I said is that, while in theory they said they will research all major OSs (including Android, which is by far the most common mobile OS), they have yet to publish a single vulnerability as a result of their research.


Maybe they do publish them, but they are all fixed before the 90-day window passes, so they don't have to be disclosed there. You can still go to git.chromium.org and AOSP's git to see the daily security/improvement patches that land there.


The disclosure window is typically about fixes being released to end-users, not just being fixed by vendors. This is also what was enforced with MS last round, where they had a fix ready but couldn't get it to pass QA and be released before the 90-day window expired.


These bugs appear to be for Yosemite, so they might only look at bugs present in the latest version of the software.


I doubt they're wasting time with bugs in OS X 10.5 either.


Was the security bug discovered by Project Zero team?


https://code.google.com/p/google-security-research/issues/li...

They haven't published Cisco bugs, but they do have a pretty broad scope.


I don't get why Apple wouldn't be motivated to fix this ASAP for their own safety.


It’s coming soon and is already available to testers: http://www.imore.com/latest-os-x-10102-beta-kills-google-dis...


Safety from what? The devs responsible probably believe (quite reasonably) that if there's untrusted code running local root exploits on their machine, they've already lost most of the battle.


Does anyone know how far back the bugs go? Mavericks, Snow Leopard?


I have a huge ego and like more money. Is HackerOne the best way to feed both?


I'm assuming they communicate these privately to the vulnerable party prior to publicizing?


FTFA: "The vulnerabilities were reported to Apple back in October but the flaws have not been fixed. After 90 days, details of vulnerabilities found by Project Zero are automatically released to the public -- which is what happened this week."


Yes. Google releases it after a 90-day grace period. So Apple has known about it for a while.


They gave them 90 days to publish a fix.


And that fix is in the latest beta. So it is coming.


Yes, they do, 90 days before disclosing them


It seems that someone is pissed off



